Tech leaders highlight military AI and 5G investments they call essential to U.S. economy and national security

Some of the biggest names in AI at companies like Amazon, Google, and Microsoft are making recommendations about how the U.S. military and federal government should fund 5G adoption and AI initiatives as part of the forward-looking work of the National Security Commission on Artificial Intelligence (NSCAI).

The NSCAI gives a select group of tech executives a platform to make recommendations with the power to affect not just military policy but, through public-private partnerships, their own businesses as well.

Congress formed the independent NSCAI Commission as part of the 2019 military budget to advise on matters at the intersection of AI and national defense. Commissioners include AWS CEO Andy Jassy, Microsoft chief science officer Eric Horvitz, former Google CEO Eric Schmidt, and Google Cloud AI chief Andrew Moore.

The report makes more than two dozen preliminary judgments, ranging from modifying a military entrance exam to uncover AI talent to creating a national 5G strategy within six months as part of the next annual defense spending bill.

“It is a national security imperative for the U.S. military to have access to a powerful 5G network to enable future AI capabilities, and ensure the network is trusted to prevent competitors from accessing our AI systems,” the report reads.

The report frames 5G as a technology adjacent to AI and machine learning. For the same reason, recommendations on biotechnology and quantum computing will also be part of the final report, which is due out in March 2021. But this quarterly report includes recommendations on issues like 5G that NSCAI commissioners believe deserve action now, and it underscores the importance of acting quickly.

The report also calls for an expansion of 5G spectrum sharing between private businesses and the U.S. military, and for Congress to pass the $750 million USA Telecommunications Act to encourage 5G research and development and open up access to radio networks. Quick action is called for, the NSCAI Commission said, to compete with Huawei.

“5G networks will form the connective tissue between AI platforms. Ensuring the United States maintains access to trusted and robust 5G networks is a critical component of overall leadership in AI. This is particularly true as microelectronics continue to advance, and the capability to run sophisticated AI models at the edge will increase. As AI becomes more dispersed throughout the network, the need for a secure and effective 5G network will increase even more,” the Commission said in the report.

The report also emphasizes the need for military and federal government funding to produce, assemble, and test semiconductor hardware like FPGAs, GPUs, and ASIC chips. The Commission prescribes simultaneous creation of government pathways to trusted, state-of-the-art integrated circuit chips and continued investment in microelectronics to ensure progress despite the slowdown of Moore’s law. To accomplish this, the Commission says Congress should invest $500 million in DARPA’s Electronics Resurgence Initiative and $100 million in the U.S. Navy’s Trusted and Assured Microelectronics program and State-of-the-Art Heterogeneously Integrated Packaging (SHIP).

“At present, the U.S. government does not have trusted access to state-of-the-art microelectronics manufacturing,” the report reads. “With an additional $50 million, SHIP could expand the existing pilot prototype program to include heterogeneous integration of multi-chip packages incorporating AI specific chips and configurations.”

The report acknowledges that the United States has enjoyed a strategic advantage in chips since the field was created decades ago but says that advantage is eroding, and that risk related to the semiconductor supply chain in the U.S. is on the rise.

Related to the Navy’s trusted microelectronics programs, the report states, the Office of the Director of National Intelligence (ODNI) has assessed ways private semiconductor companies can work with the government to establish state-of-the-art semiconductor design.

“While building a cutting-edge, high-capacity semiconductor fabrication plant for dedicated-government use would likely cost approximately $20 billion, the ODNI approach calls for a security-based split-manufacturing facility and partnering with a private sector firm to build a facility, which would produce both commercial use and government-use chips,” the report reads.

Leading up to a final report scheduled to be sent to Congress next year, the NSCAI Commission said this week it now plans to release quarterly reports on how the U.S. military and intelligence agencies can prepare for a future of algorithmic warfare or otherwise use AI for their purposes. A draft NSCAI report released last fall asserted that AI supremacy is essential to U.S. national security and economic might.

Virtually all recommendations in this week’s report are directed at Congress or the executive branch and involve reprogramming 2020 spending or allocations in the 2021 budget. Other recommendations in the report include:

  • Modify the Armed Services Vocational Aptitude Battery (ASVAB) test, an entrance exam for military recruits, to seek out people with “computational thinking” — that is, people who can define a problem, create models to solve it, and iterate. The same test should be extended to civilian DoD employees to find AI talent.
  • Establish AI ethics training for the DoD as well as the Department of Homeland Security, FBI, and intelligence agencies. This training should later be shared with state and local law enforcement agencies. The DoD should also create an expert panel to advise the federal government on AI ethics issues. This week’s report comes after DoD issued recommendations on AI ethics principles in February.
  • Establish general AI training for government procurement officials and HR professionals who are hiring software developers, data scientists, or AI practitioners.
  • Create unclassified workspaces so recent hires who do not yet have a classified security clearance can stay busy.
  • Launch a task force study and pilot program to establish a National AI Research Resource.
  • Establish deeper AI collaborations with Australia, Canada, New Zealand, and the United Kingdom.
  • Hire more university professors as part-time government researchers in order to attract young talent.

On matters of privacy and ethics, the Commission singles out Clearview AI, a company that scraped billions of images from the web to create a facial recognition system, as an example of invasive AI-powered tech that should be avoided. Tech companies that track the location of individuals — including the employers of some commission members — were also mentioned as problematic.

“These developments only confirm that we need to develop best practices, policies, and laws aimed at ensuring the responsible development and fielding of AI-enabled systems and tools consistent with democratic norms and values,” the report reads.

The commissioners, and the staff of experts advising their work, said they switched to a quarterly approach in order to inform the president, Congress, and other elected officials about issues that deserve near-term or immediate action, like 5G competition with China.

“The NSCAI is on track to submit its final report in March 2021. However, the pace of AI development, the geopolitical situation, and the relevant authorization and budget timelines in 2020 represent important opportunities for the Commission to contribute to ongoing efforts to foster research and development, accelerate AI applications, and responsibly grapple with the implications of AI for our security, economy, and society.”

The report makes a single mention of coronavirus, citing the global pandemic as a reason the Commission must remain flexible and act fast. Defense officials have referred to COVID-19, like AI, as a national security threat.

The Commission will share with the executive and legislative branches a series of classified recommendations related to specific threats to the United States from foreign state and non-state actors. The laundry list of funding requests in the report could face obstacles in the future. As the U.S. and global economy continue to falter, economists expect that a recession in the months ahead may lead to the largest reduction in U.S. GDP since World War II.

Source: Read Full Article


Surveillance, AI, and saving lives top agenda at coronavirus conference

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) hosted a conference to discuss applications of AI that governments, technologists, and public health officials are using to save human lives during the global coronavirus pandemic. Based on current mitigation efforts, the U.S. — which has more cases than any other country on Earth — could see 100,000 to 240,000 deaths over the coming months, according to a White House estimate released earlier this week.

Also on the minds of experts throughout the day-long digital event Wednesday was the level of surveillance that’s necessary, or tolerable, for tracking people with confirmed cases of COVID-19.

“I think on the technology side we can have pre-thought, emergency assignments that enable the administration in any pandemic situation; when the legislature agrees, they can have infrastructures such as those at tech companies at their disposal, because if a governor is allowed to legally command Sheraton to share its hotel beds as hospitals, I believe the governor should be allowed to command a tech company to share their tracking infrastructure for public health for a limited period of time,” said Dr. Nigam Shah, whose work at Stanford focuses on machine learning models that predict things like available hospital beds.

Stanford University professor and Center for Innovation in Global Health director Dr. Michele Barry gave a snapshot of how countries around the world have responded to COVID-19, sometimes in ways that may be considered invasive in the United States.

“I think what we’re all worried about in some of the more authoritarian countries … is that it’s often hard to unroll strict laws that are made in the time of public health emergencies. It’s very easy to roll them out, but we’re all looking with bated breath about what’s going to happen in the future,” Barry said.

Topics on the agenda at the conference ranged from AI’s role in the hunt for a vaccine to misinformation to bioweapons to applying AI to millions of tweets to understand the psychological toll of social distancing and rampant unemployment in the past month. Currently, the majority of U.S. citizens, and more than one in five people around the world, are practicing social distancing.

Public health and government officials are actively considering surveillance, quick tech builds, and ways to accelerate research or gain predictive insights. Last week, WHO executive director Dr. Michael Ryan said surveillance is part of what’s required for life to return to normal in a world without a vaccine. Countries hit hard by coronavirus in the past few months have adopted smartphone apps or other means of tracking the movement of people diagnosed with COVID-19, such as wearable wristbands.

Stanford HAI codirector Dr. Fei-Fei Li described one solution for tracking. It’s an AI-powered, in-home COVID-19 monitoring and tracking system designed for seniors who live alone, and for the people who care for them but who have to minimize contact to reduce the risk of infection. The system is built with cameras and smart sensors that track biometric information like temperature, telltale body movements, and sleep patterns. It sends the data to a secure central server, where AI models search for clinically relevant patterns. The system then alerts caregivers to any pertinent results, perhaps via an app.

Tina White, a Ph.D. candidate at Stanford University, said oppressive surveillance she read about in February concerning China’s coronavirus app inspired her to action. The COVID Watch app for iOS and Android smartphones uses the Bluetooth protocol to measure close contact between individuals. Privacy advocates prefer Bluetooth because it’s a decentralized way to share information locally on smartphone devices and could be more effective than GPS at tracking movement indoors, where people are more likely to transmit the virus. When a person tests positive, they share their contact event numbers in the app, and anyone who came into close proximity with them receives a notification.

The COVID Watch app open-sources its code to give people in other countries the choice to make their own version. Researchers from eight European countries also recently open-sourced PEPP-PT, the Pan-European Privacy-Preserving Proximity Tracing, which also uses Bluetooth.

“Our approach was designed so you don’t have to collect identifying information about an individual in order to use it. So our data set is kind of special because there’s no actual identifying information, it’s just these anonymous contact event numbers which are then deleted after a couple weeks and are only interpretable by the individual phones,” White said. “So it’s only an intervention and not something that generates a data set, which is kind of what we were aiming for — something that’s so private that we can’t even interpret it ourselves. It’s only interpretable by the phones that were locally involved in the contact events.”
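The scheme White describes — random contact event numbers that live only on the phones involved and expire after a couple of weeks — can be sketched in a few lines. This is an illustrative sketch, not COVID Watch’s actual code; the class and method names are invented for the example.

```python
import secrets
import time

RETENTION_SECONDS = 14 * 24 * 3600  # contact events expire after two weeks


class ContactLog:
    """Local, on-device log of anonymous contact event numbers."""

    def __init__(self):
        self._events = {}  # event_number -> timestamp of the contact

    def record_contact(self, now=None):
        """Both phones in a close contact generate and store the same
        random number, which carries no identifying information."""
        now = time.time() if now is None else now
        event_number = secrets.token_hex(16)
        self._events[event_number] = now
        return event_number

    def prune(self, now=None):
        """Delete contact events older than the retention window."""
        now = time.time() if now is None else now
        self._events = {n: t for n, t in self._events.items()
                        if now - t < RETENTION_SECONDS}

    def check_exposure(self, published_numbers):
        """Compare locally stored numbers against numbers voluntarily
        shared by people who reported a positive test."""
        return sorted(set(self._events) & set(published_numbers))
```

Because the matching happens on the phone that stored the numbers, no central dataset of movements or identities is ever assembled, which is the property White highlights.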

The COVID Watch team is working with the makers of TraceTogether, who on March 20 in Singapore launched the first national app for using Bluetooth to track proximity to confirmed COVID-19 cases. The team is also working with creators of MIT’s PrivateKit: Safe Paths and Community Epidemiology in Action (CoEpi).

Stanford professor and Center for Policy, Outcomes and Prevention director Jason Wang held up Taiwan as proof that you can track COVID-19 cases without autocracy. He said the key to some of the lowest mortality and infection rates in the world is low-tech solutions like preparedness and understanding when evidence on the ground demands quick, decisive action. Officials in Taiwan are using heavy fines and phone tracking, as well as the Communicable Disease Control Act, a law approved by lawmakers in 2003 after the outbreak of SARS. In Taiwan, people can use maps supplied by government officials to see the local face mask supply. Infrared scanners in buildings, and text messages sent to people in the vicinity of confirmed cases, are also in use.

Repurposing research

Efforts by academics and researchers scrambling to redesign or repurpose existing solutions to fight COVID-19 were also a major topic of discussion.

Ryan Tibshirani leads the Delphi Group at Carnegie Mellon University, one of two CDC centers of excellence in the U.S. that create models for tracking the flu. Tibshirani said his group produced some of the most accurate flu prediction models in previous CDC flu forecasting competitions, but he added that this year the group is adapting. The Delphi Group plans to make coronavirus-tracking models at the county level in the United States available in the coming weeks.

“There are models that take things like [influenza-like illness] (ILI) prevalence that can then predict hospitalizations, predict critical hospitalizations, predict deaths, and these models are very well established for influenza. The question is, if we can train them fast enough … by ‘fast’ I mean, do we have enough training data to make useful forecasts for coronavirus?” he said.
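The kind of model Tibshirani describes — mapping ILI prevalence to downstream outcomes like hospitalizations — can be illustrated with a toy linear fit. The numbers below are synthetic, invented purely for the example; real forecasting models are considerably richer.

```python
import numpy as np

# Toy illustration with synthetic numbers (not real surveillance data):
# fit a linear map from ILI prevalence to observed hospitalizations,
# then forecast hospitalizations for a new prevalence reading.
ili_prevalence = np.array([1.0, 2.0, 3.0, 4.0, 5.0])         # % of visits
hospitalizations = np.array([12.0, 21.0, 33.0, 41.0, 52.0])  # per 100k

slope, intercept = np.polyfit(ili_prevalence, hospitalizations, deg=1)


def forecast(prevalence):
    """Predict hospitalizations per 100k from an ILI prevalence reading."""
    return slope * prevalence + intercept
</antml>```

With enough paired history, the same shape of model extends to critical hospitalizations and deaths; Tibshirani’s question is whether enough coronavirus training data exists to fit such models usefully.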

Symptom surveys or screeners are not the same thing as a positive confirmation, but Tibshirani said symptom tracking can still be very useful. Determining whether a person has an ILI is important information right now since the majority of people with coronavirus show signs of fever, sore throat, and cough.

More surveillance that gathers ILI data, he said, could help fight the pandemic.

“A national or international surveillance system at this point which is capturing that would be extremely useful,” he said. “Data privacy is of course one of the foremost issues people discuss today, and this is an unfortunate time for this to have been such a big issue, because there are real issues of privacy surrounding sharing of medical data. That being said, what we’ve seen in our group — and I think this is true across, probably, anybody who’s working on COVID modeling and forecasting — is that there’s been more movement and more accomplished in the last two weeks than I’ve seen in years in terms of data sharing.”

Big tech

Following a series of presentations on apps for tracking COVID-19, a moderator said a common question among participants was why data from big tech companies wasn’t being used to power more solutions.

The White House has had conversations with tech companies like Facebook and Google about data sharing, according to the Washington Post.

Harvard University and Boston Children’s Hospital epidemiologist John Brownstein described numerous projects for real-time COVID-19 mapping that range from conversational bots that ask questions about COVID-19 symptoms to social media data mining. Research published in Science last month used location and movement data from Baidu to demonstrate the effectiveness of social distancing. Boston Children’s Hospital is working with the CDC to understand the impact of social distancing in different areas. There’s also COVID Near You, an adaptation of Flu Near You that asks people how they’re feeling every day to map people exhibiting COVID-19 symptoms. Since its launch last week, COVID Near You has been used more than 250,000 times; it was built with engineering time donated by Google, Facebook, and Airbnb.

Cooperation from tech companies is “great as I’ve ever seen at any point in time,” Brownstein said, but he added that they remain under the same privacy and data pressure they’ve faced in the past several years.

Brownstein called the idea of a national surveillance tracking app unlikely without federal government action.

“It would be hard to do some of the same types of app-based approaches that we saw in [South] Korea, Israel, and other places without really major federal support, and I think that’s where there’s potentially a little bit of a disconnect between what’s happening at the federal government level and what they’re trying to push in terms of an app-based contact tracing approach,” Brownstein said.

Tibshirani said he’s also working with Google on an ILI survey that’s collected about a million responses in the past week or so from people about how they’re feeling.

“I think it requires a giant like Google or another kind of really big tech company, and I think there are many others of these in the works to adopt one survey or mode of surveillance and just make it available to everybody. I think that would be really beneficial to the entire community and so we’re working on making that happen,” he said.

Researchers focused on epidemiological models described a sense of urgency to tweak and adapt models before communities in the southern hemisphere see a rise in cases as they enter fall and winter seasons. Scholars with opinions on all sides of the surveillance spectrum also described some optimism about cooperation happening between academic journals, businesses, and researchers in response to COVID-19.



Microsoft brings new voice styles to Azure Cognitive Services

Microsoft today announced the launch of new neural text-to-speech (TTS) capabilities in Azure Cognitive Services, its suite of AI-imbued APIs and SDKs, that enable developers to tailor the voice of their apps and services to fit their brand. Each of the three new styles — newscast, customer service, and digital assistant — offers fluid and natural-sounding speech that matches the patterns and intonations of human voices, allowing customers to deliver better, more memorable user experiences — in theory.

“Built on a powerful base model, our neural TTS voices are very natural, reliable, and expressive. Through transfer learning, the neural TTS model can learn different speaking styles from various speakers, enabling nuanced voices,” wrote Microsoft in a blog post.

The newscast voice reflects a “professional tone” you might hear on a TV or radio newscast, which is to say it contains no trace of regionalism and uses standard broadcasting pronunciation, a form of pronunciation in which no letters are dropped. In addition to Azure Cognitive Services, Microsoft says the newscast-style voice is in the Microsoft Listening Docs for WeChat, which can read aloud Word, PowerPoint, and Excel documents and generate audio for online trainings, news podcasts, and more. It’s also in the Bing mobile app: when you search with the voice search feature, you’ll hear news briefs read in the newscast voice.

As for the customer service-style voice, it features a “friendly” and “engaging” tone that Microsoft says is tuned for scenarios involving customer support, like reporting a claim. By contrast, the digital assistant voice — which is available in two styles, a chat style for casual, conversational bots and a professional style for applications like in-car digital assistants — features a helpful tone that’s suited to relaying weather forecasts, navigation directions, reminders, and other such information.

Beyond the voice styles optimized for specific scenarios, Microsoft this morning released several new emotion styles, which can be adjusted to express different emotions to fit a given context. There’s cheerfulness or empathy, and in Chinese, there’s lyrical, which Microsoft describes as “heartfelt” and optimized to read prose or poetry.

The new voice styles are available in English and Chinese, while the emotion styles are available for English, Chinese, and Brazilian Portuguese, though not all of the styles are available in all languages. Microsoft notes that the styles can be customized through the Custom Neural Voice feature within Microsoft Speech Studio, allowing brands to build unique voices that benefit from the new scenarios.
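In Azure’s speech service, styles like these are typically requested through SSML markup using the mstts:express-as element. The sketch below builds such a request in Python; the exact voice and style identifiers used here are assumptions for illustration, and the service’s documentation lists the currently supported combinations.

```python
def build_ssml(text, voice="en-US-AriaNeural", style="newscast"):
    """Build an SSML request selecting a neural voice and a speaking
    style via the mstts:express-as extension. The voice and style
    names are illustrative, not a definitive list."""
    return (
        '<speak version="1.0" '
        'xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}">{text}</mstts:express-as>'
        '</voice>'
        '</speak>'
    )


# A newscast-style utterance and a hypothetical customer-service one.
news_request = build_ssml("Here are the top stories this hour.")
support_request = build_ssml("How can I help you with your claim?",
                             style="customerservice")
```

The resulting XML string would be sent to the text-to-speech endpoint (for example, via the Speech SDK), which returns synthesized audio in the requested style.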

Microsoft is effectively going toe to toe with Google, which last year debuted 31 new AI-synthesized WaveNet voices and 24 new standard voices in its Cloud Text-to-Speech service (bringing the total number of WaveNet voices to 57). It has another rival in Amazon, which recently launched a service — Brand Voice — that taps AI to generate custom spokespeople, and which offers a number of voice styles and emotion styles through Amazon Polly, Amazon’s cloud offering that converts text into speech.



Where To Get A Nintendo Switch

As retailers respond to increased demand for online goods due to the coronavirus, many items are starting to sell out on Amazon, Walmart, and other retail websites. While certainly not as crucial as hand sanitizer, tissues, toilet paper, and other household goods running low on Amazon, the Nintendo Switch is also seeing a major shortage right now, with stock for the main console practically sold out everywhere online. The $300 console is seeing increased demand as more people find themselves stuck indoors. Nintendo’s massively popular first-party game, Animal Crossing: New Horizons, also just released on March 20.

While the main system is sold out basically everywhere, you’ll have an easier time finding the Nintendo Switch Lite, which is smaller, handheld-only, and more affordable. Most stores are nearly out of stock for both systems (if they have stock at all), but some still have the Switch Lite selling for $200. If you’re primarily interested in using the Switch as a handheld device rather than playing it on the TV, the Switch Lite is an excellent option that’ll cost you less money, and it may be a great option for younger gamers who are stuck inside right now.

According to Nintendo, more Switch consoles will be restocked soon. “Nintendo Switch hardware is selling out at various retail locations in the US, but more systems are on the way. We apologize for any inconvenience,” Nintendo told GameSpot in an email statement.

Unfortunately, the increased demand and low availability mean some sellers have jacked up Switch prices on Amazon and sites like eBay, with consoles going for well over $400. To help you out, we’ve rounded up everywhere the Switch is still in stock at its regular retail price. Right now, that’s just the Switch Lite.

Nintendo Switch | $300

The Nintendo Switch got an upgraded model last year that notably extended the console’s battery life, lasting 4.5 to 9 hours (the older model lasted 2.5 to 6.5 hours). To ensure you’re getting the newer model, look for the red box and confirm the model number is HAC-001(-01). The Switch can be played handheld, docked through the TV, or propped up on any surface using detached Joy-Cons.

It’s rare to find the Nintendo Switch in stock right now, but we’ve seen it come back into stock a few times over the past week. Some places, like GameStop and Best Buy, let you check availability at your local store, and if you can find it in stock, they may do curbside pickup so you don’t even have to go inside. If you’re looking to pair a new Switch console with Animal Crossing: New Horizons, however, you’re in luck.

GameStop has a “Starter Bundle” available to pre-order for $450. It’s expected to arrive April 30 and comes with the Switch console with Neon Joy-Con, a physical copy of Animal Crossing: New Horizons, a Switch Pro Controller, and a $20 eShop card. It’s priced exactly the same as if you bought every item individually at retail cost. It’s currently sold out as well, but we may see it restocked soon.

Nintendo Switch Lite | $200

The Nintendo Switch Lite is the smaller, handheld-only version of the Switch that’s also much cheaper, selling for $200 instead of $300. It’s now available in four different colors (shown above): Gray, Turquoise, Yellow, and the brand-new shade, Coral. The Coral Switch Lite releases April 3, and you can pre-order it now. The Switch Lite can’t be docked and played on a TV.


  • See at Target — $200
  • See at Best Buy — $200




Waymo’s AI improves autonomous vehicle performance while saving costs

Waymo, Alphabet’s autonomous vehicle division, today detailed a system — Progressive Population Based Augmentation (PPBA) — it claims has improved the performance of its autonomous systems while reducing the amount of data required to train them. Specifically, Waymo says that PPBA bolstered its cars’ object detection capabilities while decreasing costs and accelerating the training process.

It’s early days, but the approach could improve Waymo vehicles’ robustness in challenging driving scenarios — even while the fleet remains grounded by the coronavirus pandemic.

The situations Waymo’s cars encounter in the real world and in simulation give the company’s engineers opportunities to train the models underlying the Waymo Driver, Waymo’s full-stack driverless platform. By way of background, the Waymo Driver — which is now in its fifth generation — relies on a custom suite of lidar, cameras, and radars, as well as algorithms that enable it to interpret and respond to the sensor data.

Typically, ensuring these models are highly generalizable requires collecting a large, diverse set of training data and recruiting a human team to manually annotate the data. But PPBA automates the bulk of the process by discovering ways to synthesize additional data.

PPBA takes cues from AutoAugment, a Google Research and Google Brain project that uses various image augmentation operations such as rotation, cropping, image mirroring, and color shifting to morph and transform data. Trained through reinforcement learning, it selects the best augmentation policy — i.e., combination of augmentation operations — for a given sample set while reducing the computational cost of searching for policies.

PPBA also builds on Waymo’s existing data augmentation efforts. In early 2019, the company began applying techniques from a Google Brain and Google Research algorithm called RandAugment to image-based classification and detection tasks. Waymo reports that it achieved “significant” improvements in several classifiers and detectors as a result, including those that help classify foreign objects as construction equipment and animals.

PPBA targets lidar, which measures the distance to target objects by illuminating them with laser light and measuring the reflected pulses. Beyond 3D spatial information, logs from lidar sensors contain parameters including distance, operation strength, and sampling probability.


To discover policies designed for point cloud data sets, PPBA works on a point cloud augmentation search space containing eight operations, each of which is associated with a probability and specific parameters:

  • A ground truth augmentation (which has parameters denoting the probability for sampling vehicles, pedestrians, and cyclists)
  • A random flip
  • World scaling
  • Global translate noise (which has parameters for the distortion magnitude of translation operations on certain coordinates)
  • Frustum dropout
  • Frustum noise
  • Random rotation
  • Random drop laser points

Inspired by biological evolution, PPBA learns to optimize augmentation strategies by starting with a population of candidate strategies and replacing underperforming ones with better-performing “offspring.” At each iteration, it adopts the best parameters discovered in past iterations.
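A few of the listed operations can be sketched with NumPy to show what a sampled (operation, probability) policy looks like. This is a toy illustration of the search space described above, not Waymo’s implementation; the parameter ranges and function names are invented for the example.

```python
import numpy as np


def random_flip(points, _rng):
    """Mirror the point cloud across the x-z plane (negate y)."""
    pts = points.copy()
    pts[:, 1] = -pts[:, 1]
    return pts


def world_scaling(points, rng, low=0.95, high=1.05):
    """Scale all coordinates by a single random factor."""
    return points * rng.uniform(low, high)


def random_rotation(points, rng, max_angle=np.pi / 4):
    """Rotate the cloud about the vertical (z) axis."""
    theta = rng.uniform(-max_angle, max_angle)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s, c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T


def apply_policy(points, ops_with_probs, rng):
    """Apply each operation with its associated probability -- the kind
    of augmentation policy a search like PPBA explores."""
    for op, prob in ops_with_probs:
        if rng.random() < prob:
            points = op(points, rng)
    return points
```

A policy here is just a list like `[(random_flip, 0.5), (world_scaling, 0.8)]`; the search procedure’s job is to discover which operations, probabilities, and parameter magnitudes most improve the downstream detector.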

Waymo claims that in experiments, PPBA achieved performance improvements across detection architectures and saved costs because it only needs labeled lidar data for training. “Our experiments show that by applying automated data augmentation to lidar data, we can significantly improve 3D object detection without additional data collection or labeling,” wrote Waymo in a blog post. “On the baseline 3D detection model, our method is up to 10 [times] more data efficient than without augmentation, enabling us to train machine learning models with fewer labeled examples, or use the same amount of data for better results, at a lower cost.”

Above: Waymo’s PPBA focuses on optimizing a subset of augmentation parameters of the whole search space in each training iteration. The best parameters in the past iterations are recorded as references for mutating parameters in future iterations.

It’s not the first time Waymo has applied AI to expedite backend tasks like data augmentation and search.

Waymo previously collaborated with DeepMind on PBT (Population Based Training), which managed to reduce false positives by 24% in pedestrian, bicyclist, and motorcyclist recognition tasks while cutting training time and computational resources in half. In fact, following a pilot study, PBT was integrated directly with Waymo’s technical infrastructure, enabling researchers from across the company to apply it with a button click.

More recently, Waymo pulled back the curtains on Content Search, which draws on tech similar to that which powers Google Photos and Google Image Search to let data scientists quickly locate almost any object in Waymo’s driving history and logs. The company says it has contributed to “many improvements” across its system, from detecting school buses with children about to step onto the sidewalk, to people riding electric scooters, to a cat or dog crossing a street.

Source: Read Full Article


Cordio and Rambam Hospital will trial AI that detects coronavirus cases from speech samples

Cordio Medical today announced that it will begin a clinical trial at Haifa’s Rambam Hospital on an app-based AI system that analyzes speech to diagnose and remotely monitor COVID-19 patients. Assuming it passes scientific muster, the tech could be used to help treat self-quarantined patients who are likely to have COVID-19 but who haven’t yet been hospitalized. Worldwide infections stand at over 800,000 as of publication time.

Cordio’s COVID-19 solution is built on its existing HearO product, which was developed to remotely monitor patients suffering from heart failure by collecting recordings via mobile devices. Engineers at Cordio adapted it to identify the bilateral pneumonia with edema in the lungs that’s characteristic of COVID-19; the company claims it can pick up on the changes in lung fluids and the onset of inflammation even before patients notice symptoms.

HearO, which hasn’t yet obtained FDA approval, can sense fluid accumulation with a claimed accuracy of over 80%. It’s already in use at ten medical institutions, including Rambam, Beilinson, Barzilai, Hagalil, and Clalit Health Services.

AI-powered voice diagnostics

Voice diagnostic technologies were something of a niche prior to the pandemic, so given the triaging they might be able to perform for health systems, the renewed interest in them isn’t surprising.

Researchers from Carnegie Mellon and a startup released an app that can determine whether someone has COVID-19 from a voice recording, and Vocalis Health says it’s working with Israel’s Health Ministry and Directorate for Defense Research and Development to collect “vocal biomarkers” of COVID-19 patients and healthy people to develop a screening technology.

These tools aren’t without caveats — Benjamin Striner, a graduate student who contributed to the Carnegie Mellon project, cautioned that the app’s accuracy can’t be tested because of a lack of verified data. But preliminary research suggests that AI-powered voice analyses can fairly accurately diagnose other conditions, including post-traumatic stress disorder and high blood pressure. To this end, a team at Mayo Clinic is exploring how AI might be used to detect vocal changes that could be predictive of an outcome, such as stroke, heart attack, concussion, and even Parkinson’s disease.

On the private sector side of the fence, Canary Speech has developed and completed clinical trials for vocal diagnostic models targeting Alzheimer’s disease, Parkinson’s disease, and depression. Another startup — Corti — sells a product that ingests and analyzes emergency call recordings to help dispatchers identify patients experiencing cardiac arrest. As for WinterLight Labs, whose platform measures hundreds of vocal biomarkers to determine cognitive impairment or mental health issues, it’s studying potential clinical applications for Alzheimer’s disease, dementia, aphasia, and depression.

Cordio Medical was founded in 2013 by Aviv Lotan together with Peregrine Ventures, an Israel-based venture capital fund for early-stage investments.

Source: Read Full Article


Algorithmic Justice League protests bias in voice AI and media coverage

A group of six influential women studying algorithmic bias, AI, and technology released a spoken word piece titled “Voicing Erasure” to highlight racial bias in the speech recognition systems made by tech giants. The creators also made Voicing Erasure to recognize the exclusion of women scholars and researchers and their overlooked contributions.

“Racial disparities in automated speech recognition” was published roughly a week ago, and the authors found that automatic speech recognition systems from Apple, Amazon, Google, IBM, and Microsoft achieve word error rates of 35% for African-American voices and 19% for white voices. Automatic speech recognition systems from these tech giants can do things like transcribe speech to text and power AI assistants like Alexa, Cortana, and Siri.
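For context, word error rate is the standard speech recognition metric: the word-level edit distance between a system’s transcript and a human reference, counting substitutions, deletions, and insertions, divided by the number of words in the reference. A minimal sketch of the computation:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions)
    divided by the number of reference words, via Levenshtein
    distance over words. Assumes a non-empty reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[-1][-1] / len(ref)
```

By this measure, transcribing “the cat sat on the mat” as “the cat sat on mat” drops one of six reference words, a WER of about 17%.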

The Voicing Erasure project is the product of the Algorithmic Justice League, a group created by Joy Buolamwini. Others who participated in the computer science art piece include former White House CTO Megan Smith, Race After Technology author Ruha Benjamin, Design Justice author Sasha Costanza-Chock, and Kimberlé Crenshaw, a professor of law at Columbia Law School and UCLA.

“We cannot let the promise of AI overshadow real and present harms,” Benjamin said in the piece.

Buolamwini and collaborators carried out audits in 2018 and 2019 that lawmakers and activists frequently cite as central to understanding race and gender disparities in the performance of facial recognition systems from tech giants like Amazon and Microsoft. Buolamwini was also part of the Coded Bias documentary, which premiered at the Sundance Film Festival earlier this year, and “AI, Ain’t I A Woman?,” a play on an 1851 Sojourner Truth speech with a similar name.

Additional audits are in the works, Buolamwini told VentureBeat, but the poetry was made to underscore “Racial disparities in automated speech recognition.” The Voicing Erasure project also recognized that voice assistants can reinforce gender stereotypes. Many major assistants today offer both masculine and feminine voice options, with the exception of Amazon’s Alexa.

The poetic protest also recognizes that women researchers can encounter sexism, pointing to a New York Times article about the report that cites multiple male authors but fails to recognize lead author Allison Koenecke, who appears in Voicing Erasure. Algorithms of Oppression author Dr. Safiya Noble, who has also been critical of tech journalists, also participated in the spoken word project.

“Racial disparities in automated speech recognition” was published in the Proceedings of the National Academy of Sciences by a team of 10 researchers from Stanford University and Georgetown University. They found that Microsoft’s automatic speech recognition tech performed the best, while Apple’s and Google’s performed the worst.

Above: Stanford Computational Policy Lab

Each conversational AI system transcribed a total of 42 white speakers and 73 African-American speakers from data sets with nearly 20 hours of voice recordings. Researchers focused on voice data from Humboldt County, California and Sacramento, California, drawn from data sets including Voices of California and the Corpus of Regional African American Language (CORAAL), which captures African-American Vernacular English (AAVE).

The authors said their findings are likely the result of insufficient audio data from African-American speakers to train speech recognition systems, and they highlight the need for speech recognition system makers, academics, and governments sponsoring research to invest in inclusivity.

“Such an effort, we believe, should entail not only better collection of data on AAVE speech but also better collection of data on other nonstandard varieties of English, whose speakers may similarly be burdened by poor ASR performance—including those with regional and nonnative-English accents,” the report reads. “We also believe developers of speech recognition tools in industry and academia should regularly assess and publicly report their progress along this dimension.”

In statements following the release of the study, Google and IBM Watson committed to making progress.

Source: Read Full Article


PlayStation 5: What We Know About Specs, Backwards Compatibility, Games, Release Date, Price, Controller, And More

We’re only a few months into 2020, and we’re already getting a bunch of new details about Sony’s next-generation console, the PlayStation 5. Despite the coronavirus pandemic, the system is fast approaching its launch sometime during this year’s holiday season, and although there’s still plenty we don’t know about it, the latest facts are slowly clearing the fog around what to expect. Last year, Sony began to reveal the first details about its next-gen plans for the PS5, and we expect that to continue in the months ahead, particularly now that the official website has launched. [Update: Sony hosted a livestream to discuss PS5 details. It was a largely technical discussion, but we’ve updated the text below with the most relevant details that Sony shared.]

First, we know the console’s official name: unsurprisingly, it is indeed called the PlayStation 5. Additionally, we’ve learned what the logo looks like, and it’s equally unsurprising. During 2019, it was revealed that the PS5 will have PS4 backwards compatibility and SSD storage, and that it will support PSVR. The company has also outlined some green, energy-efficient initiatives it is planning for the next generation.

PS5 And PS4 News And Announcements

  • PS4's PlayStation Plus Free Games For April 2020 Revealed
  • PS5's Release Shouldn't Be Affected By Coronavirus, Sony Says
  • PS5 Specs: Sony Reveals CPU, GPU, SSD, TFLOPs Details And More
  • Everything We Know About PS5's Price, Specs, Games, And Release Window

If Sony follows the playbook from the PS4’s unveiling, we can expect it to host a special PlayStation Meeting event. There had been some speculation that such an event could take place in February, but that didn’t happen. If or when such an event is held, that could be when we learn about things like the price and the launch game lineup. Whenever Sony does roll out new info, though, it won’t be at the venues you might expect: just like last year, Sony planned to skip E3 2020 before the show was canceled, and it ditched GDC 2020 due to coronavirus concerns.

Below you can find all the details known about Sony’s upcoming console. It’s likely we’ll hear more details–big or small–in the coming months, so be sure to bookmark this page and check back often as more gets confirmed.

  • PS5 Release Date
  • PS5 Price
  • Will PS5 Be Backwards Compatible With PS4 games?
  • PS5 Specs And Disc Drive
    • PlayStation 5 Specs
  • PS5 Controller
  • Can I Use Expandable Storage With PS5?
  • Will PS5 Support PSVR?
  • The Logo
  • Energy Efficiency
  • PS5 Games
  • PS5 UI
    PS5 Release Date

    This is surely the question on everybody’s lips: when will the PS5 come out? Sony, as you’d expect, is tight-lipped on the matter, but it has confirmed a “holiday 2020” release window for the PlayStation 5. The company has been no more specific than that, and Microsoft has provided the same vague window, but historically, November has been a frequent month for console releases.

    PS5 Price

    Again, Sony has not stated how much its new console will cost, but it did say that the PS5’s price will be attractive to gamers. “I believe that we will be able to release it at an SRP [suggested retail price] that will be appealing to gamers in light of its advanced feature set,” said Mark Cerny, the lead architect of the PS4 who’s currently working on its successor.

    Of course, you wouldn’t expect Sony to say anything different, but one gets the feeling the company has learned from the PS3’s exorbitant price tag–and the console’s subsequent struggles–and the PS4’s more reasonable cost and subsequent successes.

    Will PS5 Be Backwards Compatible With PS4 games?

    Cerny also confirmed the PS5 will be backwards compatible with PS4 games, as the two consoles are built upon similar internal architectures. This will be welcome news for those who were disappointed by the PS4’s lack of backwards compatibility with PS3, PS2, and PS1 games.

    In terms of which games players can expect to play, the full list hasn’t been confirmed. However, Cerny noted that almost all of the PS4’s 100 most-played games are expected to be playable on PS5 at launch, and many should run even better than before. “Running PS4 and PS4 Pro titles at boosted frequencies has also added complexity,” Cerny said. “The boost is truly massive this time around and some game code just can’t handle it. Testing has to be done on a title by title basis. Results are excellent though. We recently took a look at the top 100 PlayStation 4 titles as ranked by playtime and we’re expecting almost all of them to be playable at launch on PlayStation 5.”

    He also discussed how backwards compatibility will be approached in the PS5 internally, and how the functionality will not be removed in subsequent console iterations given the nature of the approach to designing the feature. “One way you can achieve backwards compatibility is to put the previous console’s chip set into the new console, like we did with some PlayStation 3s,” he said. “But that’s, of course, extremely expensive. A better way is to incorporate any differences in the previous console’s logic into the new console’s custom chip. Meaning that, even as the technology evolves, the logic and feature set that the PlayStation 4 and PlayStation 4 Pro titles rely on is still available in backwards compatibility mode. One advantage of this strategy is that, once backwards compatibility is in the console, it’s in. It’s not as if a cost down will remove backwards compatibility like it did on PlayStation 3.”

    SIE president Jim Ryan told GameSpot sister site CNET that backwards compatibility and cross-generational play are important for helping players make a seamless transition to the PS5.

    “Whether it’s backwards compatibility or the possibility of cross-generational play, we’ll be able to transition that community to next-gen,” he said. “It won’t be a binary choice about whether you have to be either on PlayStation 4 or next-gen to continue your friendship.”

    In yet more welcome news, PS4 games will even run faster than they do on your current console, in part because the PS5 will contain a solid-state drive, as opposed to the hard drives that current consoles ship with. Cerny has demonstrated a load screen from Insomniac’s Spider-Man taking less than a second on a PS5 development kit, compared with 15 seconds on a PS4 Pro.

    Sony showed off the faster loading times during an investor presentation in May. You can see the video below, which was captured by Wall Street Journal reporter Takashi Mochizuki.

    PS5 Specs And Disc Drive

    This is where Sony has been surprisingly forthright with new information. The company has confirmed the PlayStation 5 will contain an AMD chip with a CPU based on third-generation Ryzen, with eight cores built on the seven-nanometer Zen 2 microarchitecture. The console will also support 8K gameplay, though this will of course be dependent on TVs catching up.

    Graphics will be driven by a custom version of Radeon’s Navi line. This graphics chip will support ray tracing, which is starting to become popular in movies and video games. Although it is traditionally thought of as a lighting technique, Cerny says it could also improve game audio. In fact, PS5 will fully support 3D audio.

    The aforementioned SSD is a big detail too, as it means games will load faster and be able to handle more objects on-screen at once than current HDD-driven consoles. Characters and cameras could move faster through game worlds, as environments could be loaded in much faster than they are at present. It will also lead to faster install times for games and patches.

    As one final detail, we know the PS5 will not go the route of the Xbox One S All-Digital Edition, which doesn’t include a disc drive. Instead, the PS5 will include a disc drive, so rest assured you’ll still be able to buy and play physical games. PS5 discs will have a capacity of 100 GB, and the console will also support 4K Blu-Rays.

    Sony has also confirmed that the PS5 is capable of supporting 4K visuals at 120Hz for those with TVs that can support that refresh rate, which is roughly double that of standard TVs. You can check the full specs below:

    PlayStation 5 Specs

    CPU: 8x Zen 2 cores at 3.5GHz (variable frequency)
    GPU: 10.28 TFLOPs, 36 CUs at 2.23GHz (variable frequency)
    GPU Architecture: Custom RDNA 2
    Memory/Interface: 16GB GDDR6 / 256-bit
    Memory Bandwidth: 448GB/s
    Internal Storage: Custom 825GB SSD
    IO Throughput: 5.5GB/s (raw), typically 8-9GB/s (compressed)
    Expandable Storage: NVMe SSD slot
    External Storage: USB HDD support
    Optical Drive: 4K UHD Blu-ray drive

    PS5 Controller

    Sony has opened up on its vision for the PlayStation 5’s controller by saying: “One of our goals with the next generation is to deepen the feeling of immersion when you play games, and we had the opportunity with our new controller to reimagine how the sense of touch can add to that immersion.

    “To that end, there are two key innovations with the PlayStation 5’s new controller. First, we’re adopting haptic feedback to replace the ‘rumble’ technology found in controllers since the 5th generation of consoles. With haptics, you truly feel a broader range of feedback, so crashing into a wall in a race car feels much different than making a tackle on the football field. You can even get a sense for a variety of textures when running through fields of grass or plodding through mud.

    “The second innovation is something we call adaptive triggers, which have been incorporated into the trigger buttons (L2/R2). Developers can program the resistance of the triggers so that you feel the tactile sensation of drawing a bow and arrow or accelerating an off-road vehicle through rocky terrain. In combination with the haptics, this can produce a powerful experience that better simulates various actions. Game creators have started to receive early versions of the new controller, and we can’t wait to see where their imagination goes with these new features at their disposal.”

    In addition to the above comments from Sony Interactive Entertainment president Jim Ryan, Sony has also separately confirmed to Wired that the controller “doesn’t have a name yet,” but contains an improved speaker, USB-C connectivity, and a larger-capacity battery.

    Can I Use Expandable Storage With PS5?

    The PS5 allows users to install non-Sony proprietary expandable storage, giving you the option to purchase off-the-shelf parts to slot into the system. According to a Digital Foundry report, “NVMe PC drives will work in PlayStation 5, [but] the only problem is that PC technology is significantly behind PS5 [and] Sony needs to validate them to ensure that they will work properly.”

    In Cerny’s talk discussing the PS5, he noted that Sony was testing expandable storage devices to ensure that they can physically fit into the PS5 and also deliver the required performance. The company has advised people not to purchase drives ahead of it revealing its recommendations, so don’t jump the gun just yet in your efforts to prepare yourself for the incoming launch. PS5-compatible SSD storage is expected to come “a bit past” launch, though.

    Will PS5 Support PSVR?

    The current PSVR will indeed be supported by PS5, as will the PlayStation Move controllers. “I won’t go into the details of our VR strategy,” Cerny has stated, “beyond saying that VR is very important to us and that the current PSVR headset is compatible with the new console.” The system architect stopped short of saying whether a new PSVR device will ever come out, however.

    The Logo

    It was believed that Sony’s briefing at CES 2020 could bring with it some PlayStation 5 news, but that only barely turned out to be the case. Sony’s Jim Ryan appeared on stage to share some impressive stats regarding the PS4 and to reveal the PlayStation 5 logo, which you can see below. It’s hardly surprising, maintaining the same style used for recent generations. Beyond that, Sony had nothing new to discuss about PS5 during the tech-focused event.

    Energy Efficiency

    Though Sony has been generally quiet about the PS5 since its announcement, it came forward to reveal just slightly more information in conjunction with the announcement of a United Nations initiative. The Playing for the Planet alliance aims to make the video game industry more environmentally friendly. To that end, Sony announced that the next generation of its hardware will use dramatically less power when placed into suspend/sleep mode. Just one million PS5 users turning on the energy-efficient feature would save enough electricity to power 1,000 US homes. The company is also conducting a carbon footprint assessment and looking into more energy-efficient data centers.

    PS5 Games

    We don’t yet know of many confirmed games coming to PS5. It’s reasonable to presume annual titles like FIFA and Call of Duty will make the jump to PS5, though their developers haven’t confirmed anything concrete.

    Currently, the only confirmed PS5 game is Godfall, a loot-based melee action-RPG. It’s scheduled for late 2020, which more than likely makes it a PS5 launch game. However, time will tell!

    The other game possibly confirmed to be playable on PS5 is the Final Fantasy VII remake, after Square Enix’s president and CEO Yosuke Matsuda said: “I believe that our teams have made it so that the game will support both the next generation and the current generation of consoles. I believe it is being developed so that it is going to be playable on both, so I’m not really concerned about that and I believe that the fans are also going to be able to enjoy it on both, including the next-generation of consoles.”

    However, it’s not clear whether Matsuda was referring to a dedicated PS5 edition of the game, or if he was simply referring to the PS4 version being playable on PS5 via backward compatibility. If it’s the latter, then we can technically count every PS4 game as playable on PS5.

    Finally, we know Bluepoint Games, the studio behind remakes such as Shadow of the Colossus and Uncharted: The Nathan Drake Collection, is working on a PS5 game that it says is a “big” one.

    PS5 UI

    PlayStation 5’s new user interface will allow you to see more details about friends’ games without opening the applications themselves. “Even though it will be fairly fast to boot games,” says Cerny, “we don’t want the player to have to boot the game, see what’s up, boot the game, see what’s up. Multiplayer game servers will provide the console with the set of joinable activities in real time. Single-player games will provide information like what missions you could do and what rewards you might receive for completing them—and all of those choices will be visible in the UI. As a player you just jump right into whatever you like.”

    Source: Read Full Article


Axonius raises $58 million to automate device security management

Axonius, a cybersecurity startup developing an end-to-end device management platform, today announced that it has secured $58 million in equity financing. Cofounder and CEO Dean Sysman said that the new capital will be used to expand Axonius’ cybersecurity asset management platform offerings, a timely move: according to Symantec, internet of things devices experience an average of 5,200 attacks per month.

“Our exponential growth in revenue and customers can be attributed to the fact that we’re solving a problem that companies of all sizes and industries face across the globe. The opportunity is massive, and this new funding round will allow us to continue to aggressively invest in our platform,” Sysman told VentureBeat via email. “We have a big vision at Axonius, and we’re here to stay. We’re focused on building a formidable, independent, pure-play cybersecurity company that can solve the asset management challenge once and for all, and let security and IT teams get back to focusing on what’s important.”

Axonius’ agentless solution streamlines asset management and spotlights coverage gaps by automatically validating and enforcing security policies. It connects with existing software and networking gear to build an inventory of assets that spans cloud and on-premises environments, whether the devices are managed or unmanaged.

Axonius supports one-off and ongoing queries that help to illustrate how assets relate to security policies, and it packs in trigger functionality that enables the programming of rules that kick off enforcement responses like software installs and device scans. Its cybersecurity capabilities are bolstered further by support for third-party apps and services — Axonius integrates with over 200 platforms including Active Directory and cloud instances like Amazon, as well as endpoint protection tools, NAC solutions, mobile device management, VA scanners, and more.
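Axonius hasn’t published the internals of its trigger system; purely as an illustration, condition-triggered enforcement of this kind amounts to evaluating rules against an asset inventory and firing the matching responses (the `Rule` type and asset fields below are hypothetical, not the Axonius API):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    name: str
    condition: Callable[[Dict], bool]  # query evaluated per asset record
    action: Callable[[Dict], str]      # enforcement response to trigger

def enforce(assets: List[Dict], rules: List[Rule]) -> List[Tuple[str, str]]:
    """Evaluate every rule against every asset and collect the
    enforcement responses that fire."""
    fired = []
    for asset in assets:
        for rule in rules:
            if rule.condition(asset):
                fired.append((rule.name, rule.action(asset)))
    return fired
```

A rule flagging devices that lack an endpoint agent, for example, could trigger a device scan or a software install as its action.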

For instance, the company’s recently launched Cloud Asset Compliance service leverages data from public cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud to automatically determine how cloud workload, configuration details, and accounts comply with industry security benchmarks. One of those benchmarks is CIS Benchmarks, a set of continuously verified best practices for securing systems and data against attack.

Investors like Arsham Memarzadeh — general partner at Lightspeed Venture Partners, which led this funding round — believe that these and other features put Axonius leagues ahead of rivals like Zededa, which raised $15.9 million in February; Armis Security, which secured $65 million in April; Vdoo, which recently nabbed $32 million; and Mocana, which raised $15 million in March. In any case, Axonius currently covers millions of devices for customers including the New York Times, Schneider Electric, Thermo Fisher, Landmark Health, and AppsFlyer. And with an eye toward growth, in February the company expanded its platform for use by federal agencies.

Axonius, which was founded in 2017, has offices in New York and Tel Aviv. Its latest fundraising round — a series C funding — was led by Lightspeed Venture Partners with participation from existing investors OpenView, Bessemer Venture Partners, YL Ventures, Vertex, and WTI. It brings Axonius’ total raised to $97 million following a $20 million series B in August 2019 and a $13 million series A last February, and it comes after a banner year in which the company’s customer base grew 910% and the size of its team doubled.

Source: Read Full Article


Via raises $200 million to optimize ride-sharing routes with AI

Transportation startup Via today announced that it has raised $200 million in series E financing, bringing its total raised to over $500 million at a $2.25 billion valuation — more than double its $1 billion valuation in 2017. According to cofounders Oren Shoval and Daniel Ramot, part of the funding will enable Via to expand its efforts around the novel coronavirus outbreak, which include emergency services and transit network optimization in locations like Berlin, Ohio, and Malta.

Via’s ride-sharing service taps AI to combine multiple passengers or packages headed in the same direction in real time, ostensibly reducing urban congestion and emissions while providing a lower-cost mobility service. It’s available in more than 70 cities in 20 countries, and to date, Via says it has provided over 70 million rides around the world, up from 50 million as of July 2019.
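Via hasn’t disclosed how its matching actually works; as a deliberately simplified illustration, grouping riders headed in the same direction can be sketched as a greedy pass that pools requests with nearby pickups and nearby drop-offs (the coordinate model, thresholds, and capacity here are invented):

```python
from math import hypot

def pool_requests(requests, capacity=2, max_detour=2.0):
    """Greedily group ride requests whose pickups are close together and
    whose drop-offs lie near one another, up to vehicle capacity.
    Each request is a (pickup_xy, dropoff_xy) pair of coordinates."""
    pools, unassigned = [], list(requests)
    while unassigned:
        seed = unassigned.pop(0)
        pool = [seed]
        for other in list(unassigned):
            if len(pool) >= capacity:
                break
            close_pickup = hypot(other[0][0] - seed[0][0],
                                 other[0][1] - seed[0][1]) <= max_detour
            close_dropoff = hypot(other[1][0] - seed[1][0],
                                  other[1][1] - seed[1][1]) <= max_detour
            if close_pickup and close_dropoff:
                pool.append(other)
                unassigned.remove(other)
        pools.append(pool)
    return pools
```

A production system would of course weigh road networks, timing, and detour costs rather than straight-line distance, but the core idea is the same: fill seats with riders whose trips overlap.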

Via first launched in New York City in 2013, but it’s now deployed globally, including in Europe as ViaVan. As of today, the company works with more than 100 partners across municipalities, public transit agencies, transportation operators, corporations, schools, and universities to optimize their transport systems, including the city of West Sacramento, the Los Angeles Metro, Transport for London (TfL), Sydney’s Transport for New South Wales (TfNSW), and the New York City Department of Education.

Last year, Via announced the launch of a driverless shuttle program in New South Wales, Australia, in partnership with the BusBot project, local bus operator Busways, local government agency Transport for New South Wales, and startup EasyMile. More recently, it launched an on-demand transportation pilot in Sacramento, California, that let customers hail rides from apps and the city’s public transportation department.

Via, like its chief rivals Uber and Lyft, faces an uphill battle as COVID-19 rages on around the world. Earlier this month, it suspended shared rides in areas like New York City and San Francisco in order to prevent infection, a blow to its shared-ride business model. And a report from Edison Trends found that consumer spending on ride-sharing dropped by as much as 21% in the seven-day period ending March 16, with further declines likely as business closures and shelter-in-place orders spurred by the coronavirus pandemic continue.

Exor led this latest funding round in Via, with participation from new investors Shell, Macquarie Capital, and Mori Building, as well as existing investors Pitango, 83North, Hearst Ventures, Ervington Investments, Planven Ventures, Broadscale Group, and RiverPark Ventures. Noam Ohana, head of Exor Seeds, the early-stage investment arm of Exor, will join Via’s board of directors as part of the deal.

Source: Read Full Article