If you think of AI as something futuristic and abstract, start thinking differently.
We’re now witnessing a turning point for artificial intelligence, as more of it comes down from the clouds and into our smartphones and automobiles. While it’s fair to say that AI that lives on the “edge”—where you and I are—is still far less powerful than its datacenter-based counterpart, it’s potentially far more meaningful to our everyday lives.
One key example: This fall, Apple’s Siri assistant will start processing voice on iPhones. Right now, even your request to set a timer is sent as an audio recording to the cloud, where it is processed, triggering a response that’s sent back to the phone. By processing voice on the phone, says Apple, Siri will respond more quickly. This will only work on the iPhone XS and newer models, which have a compatible built-for-AI processor Apple calls a “neural engine.” People might also feel more secure knowing that their voice recordings aren’t being sent to unseen computers in faraway places.
Google actually led the way with on-phone processing: In 2019, it introduced a Pixel phone that could transcribe speech to text and perform other tasks without any connection to the cloud. One reason Google decided to build its own phones was that the company saw potential in creating custom hardware tailor-made to run AI, says Brian Rakowski, product manager of the Pixel group at Google.
These so-called edge devices can be pretty much anything with a microchip and some memory, but they tend to be the newest and most sophisticated of smartphones, automobiles, drones, home appliances, and industrial sensors and actuators. Edge AI has the potential to deliver on some of the long-delayed promises of AI, like more responsive smart assistants, better automotive safety systems, new kinds of robots, even autonomous military machines.
The challenges of making AI work at the edge—that is, making it reliable enough to do its job and then justifying the additional complexity and expense of putting it in our devices—are monumental. Existing AI can be inflexible, easily fooled, unreliable and biased. In the cloud, it can be trained on the fly to get better—think about how Alexa improves over time. On a device, it must come pre-trained, and updates arrive only periodically. Yet improvements in chip technology in recent years have made real breakthroughs possible in how we experience AI, and commercial demand for this sort of functionality is high.
From swords to plowshares
Shield AI, a contractor for the Department of Defense, has put a great deal of AI into quadcopter-style drones that have already carried out—and continue to be used in—real-world combat missions. One mission is to help soldiers scan for enemy combatants in buildings that must be cleared. The DoD has been eager to use the company’s drones, says Shield AI’s co-founder, Brandon Tseng, because even an imperfect drone can keep soldiers out of harm’s way.
“In 2016 and early 2017, we had early prototypes with something like 75% reliability, something you would never take to market, and the DoD were saying, ‘We’ll take that overseas and use that in combat right now,’” Mr. Tseng says. When he protested that the system wasn’t ready, the response from within the military was that anything was better than soldiers going through a door and being shot.
In a combat zone, you can’t count on a fast, robust wireless connection to the cloud, especially now that enemies often jam wireless communication and GPS signals. On a mission, all processing, including image recognition, must happen on the company’s drones themselves.
Shield AI uses a small, efficient Nvidia computer, designed for running AI on devices, to build a quadcopter no bigger than a typical camera-wielding consumer drone. The resulting aircraft, the Nova 2, can fly long enough to enter a building and use AI to recognize and examine dozens of hallways, stairwells and rooms, cataloging the objects and people it sees along the way.
Meanwhile, in the town of Salinas, Calif., birthplace of “The Grapes of Wrath” author John Steinbeck and an agricultural center to this day, a robot the size of an SUV is spending this year’s growing season raking the earth with its 12 robotic arms. Made by FarmWise Labs Inc., the robot trundles along fields of celery as if it were any other tractor. Underneath its metal shroud, it uses computer vision and an edge AI system to decide, in less than a second, whether a plant is a food crop or a weed, and directs its plow-like claws to spare or eradicate it accordingly.
FarmWise’s huge diesel robo-weeder can generate its own electricity, enabling it to carry a veritable supercomputer’s worth of processing power: four GPUs and 16 CPUs that together draw 500 watts.
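To make that split-second choice concrete, here is a minimal sketch in Python of the kind of classify-then-act loop the description implies. It is not FarmWise’s actual software: `classify` is a hypothetical stand-in for the trained vision model, and the one-second budget comes from the article’s “less than a second” figure.

```python
# Hypothetical sketch of a sub-second crop-vs-weed decision loop.
import time

def classify(image) -> str:
    # Placeholder rule so the sketch runs; the real model is a trained
    # computer-vision network running on the robot's onboard GPUs.
    return "crop" if sum(image) % 2 == 0 else "weed"

def decide(image) -> str:
    start = time.monotonic()
    label = classify(image)
    elapsed = time.monotonic() - start
    # The robot must commit before its claws reach the plant.
    assert elapsed < 1.0, "decision arrived too late"
    return "avoid" if label == "crop" else "eradicate"

print(decide([3, 1, 4]))  # toy input -> "avoid"
```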
In our everyday lives, features like voice transcription that work regardless of whether we have a connection, or how good that connection is, could change how we prefer to interact with our mobile devices. Getting always-available voice transcription to work on Google’s Pixel phone “required a lot of breakthroughs to run on the phone as well as it runs on a remote server,” says Mr. Rakowski.
Google has almost unlimited resources to experiment with AI in the cloud, but getting those same algorithms, for everything from voice transcription and power management to real-time translation and image processing, to work on phones required the introduction of custom microprocessors like the Pixel Neural Core, adds Mr. Rakowski.
Turning cats into pure math
What nearly all edge AI systems have in common is that, as pre-trained AI, they perform only “inference,” says Dennis Laudick, vice president of marketing for AI and machine learning at Arm Holdings, which licenses chip designs and instruction sets to companies such as Apple, Samsung, Qualcomm and Nvidia.
Generally speaking, machine-learning AI consists of four phases (a code sketch follows the list):
- Data is captured or collected: Say, for example, in the form of millions of cat pictures.
- Humans label the data: Yes, these are cat photos.
- AI is trained with the labeled data: This process selects for models that identify cats.
- Then the resulting pile of code is turned into an algorithm and implemented in software: Here’s a camera app for cat lovers!
(Note: If this doesn’t exist yet, consider it your million-dollar idea of the day.)
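Those four phases compress into a few lines of Python. The sketch below is purely illustrative: synthetic numbers stand in for cat photos, a made-up rule stands in for human labelers, and scikit-learn’s off-the-shelf classifier stands in for whatever framework a real team would use.

```python
# Illustrative four-phase pipeline with synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Phase 1: capture data -- 1,000 fake "images" of 64 features each.
images = rng.normal(size=(1000, 64))

# Phase 2: label the data -- 1 means "cat," 0 means "not a cat."
# (A synthetic rule so the example runs; real labels come from humans.)
labels = (images[:, 0] + images[:, 1] > 0).astype(int)

# Phase 3: training selects a model that fits the labeled examples.
model = LogisticRegression().fit(images, labels)

# Phase 4: the trained model ships inside software and makes predictions.
new_photo = rng.normal(size=(1, 64))
print("cat" if model.predict(new_photo)[0] == 1 else "not a cat")
```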
The last bit of the process—something like that cat-identifying software—is the inference phase. The software on many smart surveillance cameras, for example, is performing inference, says Eric Goodness, a research vice president at technology-consulting firm Gartner. These systems can already identify how many patrons are in a restaurant, whether any are engaging in undesirable behavior, or whether the fries have been in the fryer too long.
It’s all just mathematical functions, ones so complicated that it would take a monumental effort by humans to write them, but which machine-learning systems can create when trained on enough data.
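To see what that means, here is a toy Python function with made-up weights: once training is done, “inference” is nothing more than fixed multiplications, additions and simple nonlinearities, which is exactly the arithmetic that small edge chips are built to churn through.

```python
# Toy two-layer network with invented weights. A real model has
# millions of weights, chosen by training rather than by hand.
import numpy as np

W1 = np.array([[ 0.5, -0.2,  0.1,  0.7],
               [-0.3,  0.8,  0.4, -0.1]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.2, -0.9]])
b2 = np.array([0.05])

def is_cat(pixels):
    # Inference: matrix multiplies plus fixed nonlinearities.
    hidden = np.maximum(0, W1 @ pixels + b1)        # ReLU
    score = 1 / (1 + np.exp(-(W2 @ hidden + b2)))   # sigmoid
    return score[0] > 0.5

print(is_cat(np.array([0.9, 0.1, 0.4, 0.8])))
```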
Robot pratfalls
While all of this technology has enormous promise, making AI work on individual devices, whether or not they can connect to the cloud, comes with a daunting set of challenges, says Elisa Bertino, a professor of computer science at Purdue University.
Modern AI, which is primarily used to recognize patterns, can have difficulty coping with inputs outside of the data it was trained on. Operating in the real world only makes it tougher—just consider the classic example of a Tesla that brakes when it sees a stop sign on a billboard.
To make edge AI systems more competent, one edge device might gather some data but then pair with another, more powerful device, which can integrate data from a variety of sensors, says Dr. Bertino. If you’re wearing a smartwatch with a heart-rate monitor, you’re already witnessing this: The watch’s edge AI pre-processes the weak signal of your heart rate, then passes that data to your smartphone, which can further analyze that data—whether or not it’s connected to the internet.
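Here is a rough Python sketch of that division of labor; the signal, window size and threshold are all invented for illustration, not drawn from any real watch or phone.

```python
# Invented example: the "watch" smooths a noisy pulse signal on-device
# and forwards only a compact summary; the "phone" does deeper analysis.
import numpy as np

def watch_preprocess(raw_signal, window=5):
    # Edge step: denoise the weak sensor signal with a moving average.
    kernel = np.ones(window) / window
    smoothed = np.convolve(raw_signal, kernel, mode="valid")
    # Ship summary features onward, not the raw waveform.
    return {"mean_bpm": float(smoothed.mean()),
            "max_bpm": float(smoothed.max())}

def phone_analyze(features, resting_max=100.0):
    # More powerful device: integrate features and flag anomalies,
    # with or without an internet connection.
    return "elevated" if features["max_bpm"] > resting_max else "normal"

rng = np.random.default_rng(1)
raw = 70 + 5 * rng.normal(size=60)           # a minute of noisy readings
print(phone_analyze(watch_preprocess(raw)))  # -> "normal"
```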
The overwhelming majority of AI algorithms are still trained in the cloud. They can also be retrained using more or fresher data, which lets them continually improve. Down the road, says Mr. Goodness, edge AI systems will begin to learn on their own—that is, they’ll become powerful enough to move beyond inference and actually gather data and use it to train their own algorithms.
AI that can learn all by itself, without a connection to more powerful systems in the cloud, might eventually raise legal and ethical challenges. How can a company certify an algorithm that has been off evolving in the real world for years after its initial release? asks Dr. Bertino. And in future wars, who will be willing to let their robots decide when to pull the trigger? Whoever does might end up with an advantage—but also all the collateral damage that happens when, inevitably, AI makes mistakes.
Write to Christopher Mims at christopher.mims@wsj.com