13 Innovative Artificial Intelligence Initiatives

After writing an article about a concrete use case of AI in the corporate world, I was a bit disappointed by the pace of digital transformation. Thinking about it, the case of Walmart using artificial intelligence to tackle supply and demand planning is impressive, but that topic is so deep that there is much more to explore.

The best way to understand something is to study it. So let’s explore the possibilities and see how artificial intelligence could reframe our future. I looked at some innovative projects around the world, and I will also discuss some misconceptions humans have about artificial intelligence.

  1. Misconceptions of AI
  2. Shimon the Musician Robot by Georgia Tech
  3. The Skywalker Arm
  4. Project Euphonia by Google
  5. Fighting Blindness with AI
  6. Enhancing Humans by MIT
  7. Intuition with Rho AI
  8. Self-driving cars – Roborace
  9. Self-driving trucks – TuSimple
  10. Talos Project by RoboHub
  11. Preserving Wildlife with TrailGuard AI
  12. Fighting Climate Change – NotCo
  13. Earthquake Early Warning – ShakeAlert
  14. Agritech in the Netherlands

Misconceptions of AI

One of the biggest misconceptions about AI is that there is a super-intelligent being called generalized AI that knows all, can do all, and is smarter than all of us put together.

That is a total misconception. AI is built on us: it mimics our thought processes; it is basically an emulation of us. Human cooperation with intelligent machines will define the next era of history.

Imagine a machine that relates to the rest of the world through the internet and can work as a creative, collaborative partner. I believe that machines are going to interact with humans just the way we interact with one another: through perception and through conversation.

As AI continues to become mainstream, it needs to really understand humans. That is why many researchers want to build emotion AI that enables machines to have empathy.

Shimon, the singer-songwriter robot

Shimon, an AI that composes music by itself

People often say that creativity is the one thing that machines will never have. The surprising thing is that it may actually be the other way around: art and creativity are easier than problem solving.

We already have computers that make great paintings and that compose music indistinguishable from music composed by people. Machines are actually capable of creativity.

Shimon giving a jazz concert in Aspen

Created at Georgia Tech’s Center for Music Technology, Shimon is a marimba-playing robot. It listens to humans playing, and it can improvise.

Shimon uses machine learning to find patterns in data. After being fed music from different artists, it can create a morphing of styles that humans would never come up with.
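To give a feel for the kind of pattern learning involved, here is a minimal, hypothetical sketch (nothing like Shimon’s actual system, which is far more sophisticated): a first-order Markov chain that learns note-to-note transitions from a couple of made-up melodies and then improvises a new one.

```python
# Toy illustration of learning musical patterns and improvising from them.
# This is NOT Shimon's actual model; it is a minimal first-order Markov chain
# trained on invented example melodies (note names are arbitrary).
import random
from collections import defaultdict

example_melodies = [
    ["C", "E", "G", "E", "C", "D", "E"],
    ["E", "G", "A", "G", "E", "D", "C"],
]

# Count how often each note follows another across the training melodies.
transitions = defaultdict(list)
for melody in example_melodies:
    for current_note, next_note in zip(melody, melody[1:]):
        transitions[current_note].append(next_note)

def improvise(start="C", length=8):
    """Generate a new melody by sampling the learned note transitions."""
    notes = [start]
    for _ in range(length - 1):
        options = transitions.get(notes[-1])
        if not options:          # dead end: fall back to the start note's options
            options = transitions[start]
        notes.append(random.choice(options))
    return notes

print(improvise())   # e.g. ['C', 'E', 'G', 'A', 'G', 'E', 'D', 'C']
```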

The Skywalker arm

Another innovation from the Georgia Tech center is the Skywalker arm. Currently, even the most advanced prosthetic hands can’t give a thumbs up or flip you the bird.

They can only open or grip using all five fingers at once. The Skywalker hand, inspired by Luke Skywalker from Star Wars, brings what was once the realm of sci-fi a little closer to our galaxy.

Most of the prosthetics available on the market today use EMG technology, which stands for electromyography. Essentially, two sensors make contact with the residual limb and pick up electrical signals from the muscles.

Those signals open and close the hand, and the user can rotate it as well. But the problem with EMG is that its electrical signals are very vague, essentially just a value from zero to 100%.

It’s not very accurate at all. The Skywalker arm uses ultrasound instead: ultrasound provides an image, so you can see everything that’s going on inside the arm.

Ultrasound uses high-frequency sound waves to capture live images from inside the body. As the user flexes the muscles that would move each missing finger, ultrasound generates live images that visualize his intention.

AI then uses machine learning to recognize those patterns, letting a man who has lost one of his hands move all five of his fingers individually.
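As a rough, hypothetical sketch of the software side of this idea (the real system uses much richer ultrasound imaging and models), a classifier can be trained to map features extracted from each ultrasound frame to the finger the user intends to move. The data below is random and only stands in for real calibration recordings.

```python
# Hypothetical sketch: classify the intended finger movement from features
# extracted out of ultrasound frames. The "frames" here are random numbers
# standing in for image features gathered during a calibration session.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

# Fake training data: 200 frames, each reduced to 64 features, labeled with
# the finger the user was asked to move while that frame was recorded.
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, len(FINGERS), size=200)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# At runtime, each new frame is classified into an intended finger movement,
# and the prediction would drive the corresponding motor in the prosthesis.
new_frame = rng.normal(size=(1, 64))
print("predicted intent:", FINGERS[int(model.predict(new_frame)[0])])
```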

Project Euphonia by Google

Project Euphonia has two different goals. One is to improve speech recognition for people who have a variety of medical conditions.

The second goal is to give people their voice back, which means recreating the way they used to sound before they were diagnosed. If you think about communication, it starts with understanding someone and then being understood.

For a lot of people, their voice is like their identity.

In the US alone, roughly one in 10 people suffer acquired speech impairments, which can be caused by anything from Amyotrophic Lateral Sclerosis (ALS) to strokes to Parkinson’s to brain injuries.

Solving it is a big challenge.

Voice Imitation

Voice imitation is also known as voice synthesis, which is basically speech recognition in reverse: machine learning converts text back into waveforms, and these waveforms are then used to create sound. This is how Alexa and Google Home are able to talk to us.
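If you just want to play with the text-to-waveform step yourself, an off-the-shelf engine such as pyttsx3 (which wraps your operating system’s synthesizer rather than a neural voice model like Google’s) is enough to get a feel for it:

```python
# Minimal text-to-speech example using the pyttsx3 library, which drives the
# operating system's built-in synthesizer (not the neural voice models
# discussed above). Install with: pip install pyttsx3
import pyttsx3

engine = pyttsx3.init()           # pick the default system voice
engine.say("Hello, this sentence was turned into a waveform and played back.")
engine.runAndWait()               # block until playback finishes
```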

Speech recognition

So how does speech recognition work? First, the sound of our voice is converted into a waveform, which is really just a picture of the sound. Waveforms are then matched to transcriptions, or labels, for each word.

These mappings exist for most words in the English language. This is where machine learning takes over: using millions of voice samples, a deep learning model is trained to map input sounds to output words, and the algorithm then uses rules such as grammar and syntax to predict each word in a sentence.

This is how AI can tell the difference between words that sound alike.

The speech recognition models that Google uses work very well for people whose voices sound similar to the examples used to train them: in 90% of cases the model will recognize what you want to say.
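Here is a toy walk-through of the pipeline described above. The “acoustic model” is just an untrained random matrix, so the output is gibberish; the point is only to show the stages, from waveform to spectrogram frames to per-frame character scores to a greedy decode (where a language model would normally clean things up).

```python
# Toy speech-recognition pipeline: waveform -> spectrogram frames ->
# per-frame character scores -> greedy decoding. The "model" is random,
# so the transcript is nonsense; only the structure is illustrative.
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = list("abcdefghijklmnopqrstuvwxyz ")

# 1) A one-second fake waveform sampled at 16 kHz (real input would be a mic).
waveform = rng.normal(size=16000)

# 2) Slice it into 25 ms frames and take magnitude spectra (a crude spectrogram).
frame_len = 400
frames = waveform[: len(waveform) // frame_len * frame_len].reshape(-1, frame_len)
spectrogram = np.abs(np.fft.rfft(frames, axis=1))            # (n_frames, n_bins)

# 3) "Acoustic model": map each frame's spectrum to a score per character.
#    A trained deep network would replace this random projection.
weights = rng.normal(size=(spectrogram.shape[1], len(ALPHABET)))
char_scores = spectrogram @ weights                            # (n_frames, 27)

# 4) Greedy decoding: pick the best character per frame and merge repeats.
best = char_scores.argmax(axis=1)
decoded = "".join(ALPHABET[i] for i, prev in zip(best, np.r_[-1, best[:-1]]) if i != prev)
print(decoded)   # nonsense here; this is where grammar/syntax rules would help
```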

Fighting Blindness

Complications of diabetes include heart disease and kidney disease, but one of the most important complications is diabetic retinopathy.

And the reason it’s so important is that diabetic retinopathy is one of the leading causes of blindness worldwide.

If a doctor examines the eye, or you take a picture of the back of the eye, you will see lots of those bleeding spots in there.

Today there are not enough doctors to perform the screening. Ophthalmologists are in very limited supply, so there need to be other ways to screen diabetic patients for these complications.

Eyenuk, Inc. is a global digital health company and a leader in real-world AI eye screening for autonomous disease detection, and this year they got FDA approval for their autonomous AI system for diabetic retinopathy screening.

First, models are trained using tagged images of things like cats or dogs. After looking at thousands of examples, the algorithm learns to identify new images without any human help.

For the retinopathy project, over 100,000 eye scans were graded by eye doctors, who rated each scan on a scale from one to five, from healthy to diseased.

These images were then used to train a machine learning algorithm. Over time, the AI learned to predict which images showed signs of disease.
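As an illustration of that training setup, here is a hedged, minimal sketch: a tiny convolutional network that learns to grade images on a five-level scale. The images and labels are random placeholders, not real eye scans, and the architecture is nothing like a clinically validated system.

```python
# Minimal sketch of grading retina-style images on a 5-level severity scale.
# Random data stands in for real, doctor-graded fundus photographs.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
images = rng.random((32, 128, 128, 3)).astype("float32")   # fake fundus photos
grades = rng.integers(0, 5, size=32)                        # severity 0..4

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation="softmax"),         # one score per grade
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, grades, epochs=2, verbose=0)

# Inference: grade a new scan and turn it into a referral recommendation.
new_scan = rng.random((1, 128, 128, 3)).astype("float32")
grade = int(model.predict(new_scan, verbose=0).argmax())
print("predicted grade:", grade, "-> refer" if grade >= 2 else "-> routine follow-up")
```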

The patient comes in and gets a picture taken of the back of each eye, one for the left eye and one for the right; the images are uploaded to the algorithm.

Once the algorithm performs this analysis, it sends the results back to the system along with a referral recommendation.

Because the algorithm works in real time, you can get a real time answer that comes back to the patient.

Once you have the algorithm, it’s as routine as taking a weight measurement: within a few seconds the system tells you whether you have retinopathy or not.

Retinopathy is when high blood sugar damages the retina. Blood leaks, and laser treatment basically welds the blood vessels to stop the leakage. Routine eye exams can spot the problem early.

AI is the next generation of tools that we can apply to clinically meaningful problems, and it really starts to democratize healthcare. Even for mental health disorders like depression or anxiety, there are facial and vocal biomarkers that machines can detect.

Enhancing Humans by MIT

The way in which limbs are amputated has not fundamentally changed since the US Civil War.

The Center for Extreme Bionics at the MIT Media Lab invented the agonist-antagonist myoneural interface (AMI). The AMI is a method to restore proprioception to people with amputation.

Even if new-generation prosthetics allow amputees to regain abilities, they will not feel complete without sensation. The agonist-antagonist myoneural interface can help bridge that gap.

At MIT, they are developing a novel way of amputating limbs: they create little biological joints by linking muscles together in pairs.

When a person thinks about moving the limb that has been amputated, these muscle pairs move and send sensations that can be linked directly to a bionic limb in a bidirectional way.

Not only can the person think and actuate the synthetic limb, but they can feel those synthetic movements within their own nervous system.

Until recently, creating a bionic limb that a person can actually feel has been more science fiction than reality. Now, machine learning is revolutionizing the way we think about medicine.

They developed an algorithm that builds a virtual model of the missing biological limb. When the patient fires his muscles with his brain, an electrode measures that signal.

That signal drives the virtual muscle, which sends sensations back to the brain about position and dynamics; the bionic limb is almost instantly felt as part of you, almost as good as a natural foot.
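The sketch below is a greatly simplified, invented illustration of that “virtual limb” loop: an EMG-like activation signal drives a first-order virtual muscle whose state (a joint angle) could both actuate a bionic ankle and be returned as feedback. The constants and dynamics are made up for illustration only.

```python
# Greatly simplified "virtual muscle" loop: an EMG-like signal drives a
# first-order model of an ankle joint. All constants are invented.
import numpy as np

dt = 0.01                       # simulation step, seconds
time = np.arange(0, 2, dt)
emg = np.clip(np.sin(2 * np.pi * 0.5 * time), 0, None)   # fake activation, 0..1

angle = 0.0                     # virtual ankle angle, degrees
max_angle = 30.0
tau = 0.1                       # how quickly the virtual muscle responds

for activation in emg:
    target = activation * max_angle
    angle += (target - angle) * (dt / tau)   # first-order approach to the target
    # 'angle' would drive the prosthetic motor and, in the AMI concept,
    # proportional feedback would be returned to the residual muscles.

print(f"final virtual ankle angle: {angle:.1f} degrees")
```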

Building Superhuman abilities

When it comes to superhuman ability, you may think of people like LeBron James, Michael Phelps or Serena Williams. But it’s not just the body that can be enhanced. Sometimes it’s something less tangible, like human intuition.

Back in the day, intuition used to play a big part in sports: athletes and coaches relied on their gut to make decisions. Now, some competitors are leaning more and more on machine learning, looking to gain an extra edge.

The strength of these AI systems comes from having access to a ton of data and being able to find patterns in that data, generating insights and inferences that people may not be aware of.

Imagine augmenting people’s abilities to make decisions based on that data. Machine learning is transforming many industries and applications, especially in areas where there’s a lot of data.

Predicting outcomes can have a big payoff; finance, sports, and medicine come to mind. Using an emerging technology like machine learning in a classic old-school sport like stock car racing doesn’t necessarily sit well with everybody. This is what Rho AI is working on.

Their tool analyzes the optimum strategy call for every car in the field in real time. Not just their own car, but every car. They collect the braking, steering, throttle, and acceleration of every car in the field, in real time.

All this data is fed into an AI program called Pit Rho. Sensors in every car measure speed, throttle, braking, and steering. Advanced GPS tracks each car’s position on the track. All this data is made available to every team.

In a NASCAR race, pit stops are the key to a winning strategy. They’re using an AI technique called reinforcement learning, in which the computer is given the rules of the game and plays it over and over until it learns every possible move and outcome.
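To make reinforcement learning concrete, here is a toy sketch (not Rho AI’s actual system): tabular Q-learning on an invented mini-race where the only decision each lap is “pit” or “stay out”. Older tyres mean slower laps, pitting costs time but resets tyre age, and the agent learns from repeated play when pitting pays off.

```python
# Toy Q-learning on an invented pit-stop decision problem.
import numpy as np

rng = np.random.default_rng(0)
N_LAPS, MAX_AGE = 30, 15
PIT_COST = 25.0                          # seconds lost in the pits
ACTIONS = ["stay", "pit"]

def lap_time(tyre_age):
    return 60.0 + 0.8 * tyre_age         # fake tyre-degradation model

Q = np.zeros((MAX_AGE + 1, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(5000):
    age = 0
    for lap in range(N_LAPS):
        a = rng.integers(2) if rng.random() < epsilon else int(Q[age].argmax())
        cost = lap_time(0 if a == 1 else age) + (PIT_COST if a == 1 else 0.0)
        next_age = min(1 if a == 1 else age + 1, MAX_AGE)
        reward = -cost                   # faster laps = higher reward
        Q[age, a] += alpha * (reward + gamma * Q[next_age].max() - Q[age, a])
        age = next_age

# The learned policy: at which tyre age does the agent choose to pit?
policy = [ACTIONS[int(Q[age].argmax())] for age in range(MAX_AGE + 1)]
print(dict(enumerate(policy)))
```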

Then it improves through trial and error, with a patience that no human could possibly have. This is the same approach that let DeepMind Technologies (acquired by Google in 2014) build AlphaGo, the first computer program to defeat a professional human Go player, the first to defeat a Go world champion, and arguably the strongest Go player in history.

This is also what made OpenAI Five win at the famous video game Dota 2 by Valve.

Convincing humans that machines know what they’re doing is the central difficulty in deploying AI out in society. Do we trust the AI to make decisions for us? We already do it with GPS maps.

Self-driving cars – Roborace

It’s hard to know if machine learning will ever decode the mysteries of love or creativity. Maybe it’s not even a mystery, just data points.

But what about other human qualities like instinct? Driving a car already requires us to make countless unconscious decisions. AI is learning to do that. But can we teach it to do more?

British startup Roborace wants to break new ground in driverless cars. To do so they believe they need to test the boundaries of the technology.

Working at the very outer edge of what’s safe and possible, where the margin for error is razor thin, they have, after years of trial and error, created the world’s first AI racecar.

More than 50 companies around the world are working to bring self-driving cars to city streets. The promise of driverless taxis, buses and trucks is transformative. It will make our world safer and cleaner, changing the way our cities are designed, the way societies function, even how we spend our time.

Think about a self-driving car out in the real world. In order to build that system and have it work, it’s got to be virtually perfect.

If you had a 99% accuracy rate, that wouldn’t be anywhere near enough, because once you take that 1% error rate and multiply it by millions of cars on the road, you’d have accidents happening constantly.

The error rate has to be extraordinarily low to make this work.

As a human, you have lots of advantages over a computer: you know exactly where you are in the world, and you have eyes that enable you to see things.

Engineers need to implement technology on the vehicles to enable them to see the world. They use military-grade GPS. They also use LIDAR sensors, which are basically laser scanners that create a 3D map of the world around the vehicle.

There’s one last thing they use: vehicle-to-vehicle communication between the cars, so each car can tell the others its position on the track. Just to be clear, your phone does not come with military-grade GPS.
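Here is a hedged sketch of one small piece of such a perception stack: turning a 360-degree ring of LIDAR range readings (invented numbers here) into obstacle points in the car’s frame and marking them on a coarse 2D occupancy grid. Real racecars fuse far more sensors than this.

```python
# Toy LIDAR processing: polar ranges -> Cartesian points -> occupancy grid.
import numpy as np

rng = np.random.default_rng(0)
ranges = 5.0 + 15.0 * rng.random(360)        # one fake distance (m) per degree
angles = np.deg2rad(np.arange(360))

# Polar -> Cartesian: where is each detected obstacle relative to the car?
xs = ranges * np.cos(angles)
ys = ranges * np.sin(angles)

# Mark the hits on a 40 m x 40 m grid with 1 m cells, car at the centre.
grid = np.zeros((40, 40), dtype=bool)
cols = np.clip((xs + 20).astype(int), 0, 39)
rows = np.clip((ys + 20).astype(int), 0, 39)
grid[rows, cols] = True

print("occupied cells:", int(grid.sum()))
```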

Self-driving trucks – TuSimple

TuSimple

We’ve all heard about self-driving cars, but self-driving trucks? Why are they setting the pace of the autonomous driving industry?

We already have robotic airplanes; most airliners fly themselves. But self-driving cars are a harder problem, because the roads have a lot more going on than the air does.

Driving on the freeway is much easier than driving in the city. We will see fleets of automated trucks long before we see self-driving cars in the city.

Driving a car is what is called an AI-complete problem: if you solve it, you solve every other problem in AI. However, to solve it, you need to solve every problem in AI.

It requires vision. It requires robotic control, motion and navigation, but also social interaction with the other drivers. At the end of the day, in order to drive a car well in the city, you need everything.

It requires an enormous amount of common sense.

Merging onto a highway is difficult and dangerous. It requires mastering a complicated set of physical and mental skills, having keen sensory and spatial awareness, and preparing for unpredictability and human fallibility.

TuSimple’s AI combines the images coming from the cameras with other sensors: LIDAR and radar. LIDAR (like the one used by Roborace, mentioned earlier) is a laser rangefinder that measures the distance to objects 360 degrees around the truck, giving a three-dimensional picture of the world.
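A hedged, simplified example of what combining camera and LIDAR can look like (not TuSimple’s actual pipeline): given a bounding box from a camera detector and LIDAR points already projected into the image, estimate how far away the detected vehicle is from the median range of the points inside the box. All values below are made up.

```python
# Simplified camera + LIDAR fusion: median LIDAR range inside a detection box.
import numpy as np

rng = np.random.default_rng(0)

# Fake projected LIDAR points: pixel (u, v) plus measured range in metres.
points_uv = rng.uniform([0, 0], [1920, 1080], size=(500, 2))
ranges_m = rng.uniform(5, 120, size=500)

# Fake camera detection: a truck detected in this pixel box (u_min, v_min, u_max, v_max).
box = (800, 400, 1100, 700)

inside = ((points_uv[:, 0] >= box[0]) & (points_uv[:, 0] <= box[2]) &
          (points_uv[:, 1] >= box[1]) & (points_uv[:, 1] <= box[3]))

if inside.any():
    print(f"estimated distance to object: {np.median(ranges_m[inside]):.1f} m")
else:
    print("no LIDAR returns inside the detection box")
```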

The central problem in AI is that human beings have common sense and computers do not. We take common sense for granted. We know how the world works.

Everything that we do in our daily life involves common sense.

One of the biggest changes we face when we think about AI and its future is what we do when AI changes what job functions are and what the workforce looks like.

People fear change, and change that you don’t understand is terrifying for a lot of people. So the future of work is not humans being replaced by machines; it’s humans figuring out how to do their job better with the help of machines.

To most people, shipping and logistics aren’t that sexy, but to an AI they’re a dream: over 14,000 people moving more than 8 million giant containers around the world. Automating this industry will make it more efficient, for sure, but also safer.

In the old design, you had the vessel activity mixing with the truck activity, competing for space and creating congestion and safety issues. There is a need for a new model.

Talos Project

RoboHub is a premier robotics incubator at the University of Waterloo in Ontario, Canada.

One thing they’re doing is developing AI and robotics for environments that are unstructured and more human, like a home.

Talos is one of the most advanced humanoids on the planet; it can walk and talk and see you in 3D. But it can’t do most of those things out of the box: you really have to take it as a tool and teach it how to do a lot of these things.

They want to explore two different aspects of AI. The first is perceiving objects in the world: Talos has cameras in its eyes, and it also has a depth camera, so it can actually see how far away things are within its field of vision, much like we do with our depth perception.

As soon as Talos has that ability to see, it can start building a map of its world, so it knows in 3D everything that is around it.

Computer vision is a way to mimic how we see the world: differentiating a human in front of me from an object, a car, or a dog in the background, for instance. In simple terms, it is taking an image and understanding what it represents, with a notion of distance.
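A minimal sketch of how a depth camera lets a robot “know in 3D what is around it” is the standard pinhole back-projection: every depth pixel becomes a 3D point. The depth image and camera intrinsics below are invented; a real robot would use its calibrated parameters.

```python
# Back-project a (fake) depth image into a 3D point cloud with a pinhole model.
import numpy as np

rng = np.random.default_rng(0)
H, W = 120, 160
depth = rng.uniform(0.5, 4.0, size=(H, W))        # fake depth in metres
fx = fy = 100.0                                   # fake focal lengths (pixels)
cx, cy = W / 2, H / 2                             # principal point

us, vs = np.meshgrid(np.arange(W), np.arange(H))
x = (us - cx) * depth / fx
y = (vs - cy) * depth / fy
z = depth
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)   # one 3D point per pixel

print("point cloud shape:", points.shape)               # (19200, 3)
print("nearest obstacle at %.2f m" % points[:, 2].min())
```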

The biggest misconception about robots is that they are more capable, and more generalized, than they actually are.

Valkyrie robot from NASA

You may have seen humanoid robots that can do parkour or backflips. The only ones that are really comparable are the Valkyrie, designed by NASA, of which there are only two or three in the world.

Atlas robot from Boston Dynamics

The Atlas from Boston Dynamics is comparable as well, but most of these robots have very few sensors. A lot of the time they are remote controlled or given a very clean, prescriptive path.

In a situation like that, what they’re doing is pushing the limits of the mechanical systems; with Talos, engineers try to push the software side.

The next step going forward is to start replacing all of our dumb, blind, dangerous machines with machines that have sensors and vision built into them, so they can work side by side with humans to be more productive and safer at the same time.

As humans, we have a sense of touch, and we have a sense of how much force we’re applying and how much is being applied to us. Talos doesn’t have skin; it’s hard plastic everywhere. It can only really sense what’s happening in its motors, so they have to teach it how to translate that into a sense of touch.

Within robotics, the areas where you’re really going to see the important advances are those environments that are relatively controlled and predictable. A good example is an Amazon warehouse with their Kiva robots.

In those environments, you already see lots of people, and lots of robots. And they are working together.

Once you have a process and you’ve reduced it to an algorithm, you can replicate it, so if a robot learns a whole new skill, you can copy that knowledge through the cloud to all the other robots, and now they all have that same skill.

It’s a whole new world and a whole new kind of economics, and we’re just beginning to understand its implications.

Preserving wildlife – TrailGuard AI

35 000 African elephants are killed for ivory each year

Kurt Vonnegut said that “science is magic that works”, and it makes perfect sense. But it wasn’t long ago that we couldn’t understand what caused an entire species to become extinct, why the ground shook, or why crops dried out.

One of the promises of AI is that it’ll enable us to use machine learning for prediction, and conservation, anything from protecting wildlife to anticipating earthquakes. Seeing the future may not prevent disasters, but I think we can all agree we need an upgrade.

Yet every year 35,000 African elephants are killed by poachers. A single pound of ivory can be sold for $1,500 on the black market. Elephants are about a decade away from extinction. But it’s not just about protecting one kind of animal or keystone species.

There are about 1 million other species in danger of being wiped out. Animals affect vegetation, biodiversity, and the shape of ecosystems, all of which in turn impact people. It’s all connected. It’s not a stretch to say that protecting African elephants is about protecting humanity and our future.

The Mara Triangle is the southwestern part of the Maasai Mara National Reserve in Kenya. It is managed by the Mara Conservancy, a local nonprofit organization formed by the local Maasai under contract with the Trans-Mara county council, and contains a number of anti-poaching units.

In the hope of identifying poachers, rangers have installed camera traps and GPS to track and protect animals. But the man-hours needed to look through thousands of photos take too much time, and by the time rangers get there it has already been three days since the massacre.

That’s too late for those animals. They’re already dead.

Intel partnered with RESOLVE (a non-governmental organization) to use technical innovation to solve some of the planet’s most pressing environmental problems. They began developing an AI-powered anti-poaching device they call TrailGuard AI.

What they’ve done is put a very powerful computer chip called a vision processing unit (VPU) into the camera trap. All the pictures the camera trap takes go through this VPU chip, which figures out whether a person is present or not and only sends you the pictures where there is a person.

This cuts out about 95% of the pictures that don’t need to be checked. The device runs an AI algorithm that looks at every single picture and decides whether it is one the park rangers are interested in.

The algorithm is fed thousands of images of both humans and animals. It analyzes body shapes, facial geometry, movement, and other features until it learns how to distinguish one from the other, from any angle and in any light.
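In pseudocode terms, the on-device filtering idea looks something like the sketch below. The detector and transmit functions are placeholders standing in for the VPU model and the low-bandwidth uplink in TrailGuard AI; only the filtering logic is the point.

```python
# Edge filtering sketch: only transmit camera-trap frames that contain a person.
import random

def detect_person(frame) -> bool:
    """Placeholder for the on-chip vision model (random answer here)."""
    return random.random() < 0.05      # pretend ~5% of frames contain a person

def send_to_rangers(frame) -> None:
    """Placeholder for the low-bandwidth uplink to park headquarters."""
    print("ALERT sent for", frame)

frames = [f"frame_{i:04d}.jpg" for i in range(1000)]   # fake captures
sent = 0
for frame in frames:
    if detect_person(frame):           # only humans trigger transmission
        send_to_rangers(frame)
        sent += 1

print(f"transmitted {sent} of {len(frames)} frames "
      f"({100 * (1 - sent / len(frames)):.0f}% filtered out on-device)")
```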

Image recognition is looking at an image and understanding what’s in it. It’s a perfect example of something we take for granted: human beings can just look at an image and understand what’s there without even thinking about it.

But it’s an incredibly hard problem. Amazingly, we now have computers which, at least within certain parameters, can do this as well as human beings.

The beauty of this system here in the Mara is the amount of time it takes from when a poacher walks in front of the camera to when the system says “that’s a human” and sends the picture to headquarters.

In under two minutes, it’s possible to dispatch a ranger team to that site and hopefully make an arrest before the poachers have even had a chance to come in and kill those animals.

Fighting climate change by changing our consumption habits

Eating meat causes climate change. Cows and other animals emit methane, a harmful greenhouse gas, and one third of the world’s farmable land is used to feed livestock. Look at it this way: eating one burger has about the same environmental impact as driving a gas car for 10 miles. So, what do we do?

There is a new, better way of making food, and that’s plant-based: creating plant-based alternatives to popular animal proteins. Plenty of companies already do that.

NotCo, a Chilean startup, is focusing on taste and perception, which is more elusive. They are using AI to create recipes that make people think they’re eating steak or eggs or milk when they’re not.

How do you reproduce an animal-based food using only plants?

Their algorithm understands that there are clear connections between the molecular components in food and the human perception of taste and texture. It’s all about “magic that works”.

The AI looks at the molecular makeup of foods like milk, for instance, then creates a list of ingredients from its most basic building blocks. Finally, using machine learning and a massive database, “Giuseppe” (the name of their algorithm) recombines selected elements from plant-based foods to recreate the taste and texture of the original.

Humans are good at reasoning about two ingredients, or maybe three, at a time. After that, it becomes very difficult for us. But the machine can think about five ingredients, ten ingredients, how they all go together, and what the flavor profiles will be. And that’s really the great power of the machine.
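As a purely hypothetical sketch of the matching idea (Giuseppe’s real representation and search are far richer), each food can be represented as a vector of molecular or sensory features, and the plant blend whose profile is closest to the animal target wins. The feature values and ingredient list below are invented.

```python
# Toy "find the closest plant blend to a target food" search using cosine similarity.
import itertools
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Fake 5-dimensional profiles: [sweetness, fattiness, protein, acidity, colour]
target_milk = np.array([0.6, 0.7, 0.5, 0.2, 0.9])
plants = {
    "cabbage":   np.array([0.5, 0.1, 0.2, 0.3, 0.2]),
    "pea":       np.array([0.3, 0.2, 0.8, 0.1, 0.4]),
    "coconut":   np.array([0.4, 0.9, 0.1, 0.1, 0.9]),
    "pineapple": np.array([0.8, 0.0, 0.1, 0.7, 0.6]),
}

# Score every 2-ingredient blend (equal parts) against the milk profile.
best = max(itertools.combinations(plants, 2),
           key=lambda pair: cosine(target_milk, (plants[pair[0]] + plants[pair[1]]) / 2))
blend = (plants[best[0]] + plants[best[1]]) / 2
print("closest blend:", best, f"similarity {cosine(target_milk, blend):.2f}")
```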

It’s about working the fibers. There are a lot of similarities between plants and animals because we share part of the same chemical nature: all of them have DNA, all of them have RNA, all of them have proteins, lipids, and carbohydrates.

An almond and a walnut match at 97% at the molecular level; just the remaining 3% gives your brain the identity that the walnut is a walnut. They need to identify the molecular features that carry that identity and reproduce them with specific plants to make the right ingredients.

They are building an ecosystem that will see food in a unique way, because there are a ton of formulations to be made.

They identified that cabbage, in a specific environment, releases a molecule that is very similar to lactose; to your brain it’s kind of the same thing. But it’s red.

The algorithm didn’t yet understand that color, along with taste and texture, is one of the characteristics people value in the sensory experience.

They made NotMayo (a mayonnaise substitute), which led them to NotMilk, NotIceCream, and NotMeat, and they are now working on NotTuna (based on berry molecules).

We have devastated the oceans. Creating a replacement for tuna is something that will move the needle in the world.

One thing about large-scale behavior change is that it is often not driven by nutrition or sustainability considerations. It’s driven by flavor.

If we can get that same property into healthy and sustainable foods, that can be powerful.

92% of their consumers are non-vegetarian, and they don’t necessarily care about sustainability. What they care about is eating tasty food.

Earthquake Early Warning with ShakeAlert

Can AI protect people from the destructive forces of nature? Human beings can change, even if they tend to resist, but the laws of nature don’t change.

Take earthquakes: if we could know when one is about to happen, that could save a lot of lives. It’s been 319 years since the last big one here, and seismically it’s one of the quietest spots in the world.

The possibility that it’s quiet simply because the next big one is a natural event that is going to happen anyway freaks a lot of people out.

The PNSN (Pacific Northwest Seismic Network) is a network of seismic sensors, and the idea is that they are continuously monitoring for earthquake activity of all scales, all the time.

An earthquake early warning system is not about predicting an earthquake; it’s about identifying those first quiet waves that are coming in. All 400 of their instruments send a constant stream of data back to a data center.

With each of those 400 sensors there are a lot of signals which aren’t earthquakes, so you can imagine it generates an incredible volume of data. Where AI comes in is filtering out what we call cultural noise: trains, trucks, people.

Vibration data from all 400 sensors in the region is fed into a machine learning algorithm trained to differentiate earthquake tremors from construction or buses, for instance.

Using machine learning and a huge database of known signals, the AI can quickly sort through the noise of the natural world and find the signal of earthquakes.
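A toy version of that filtering step is sketched below: classify short waveform windows as “earthquake-like” or “cultural noise” from two simple features. The synthetic signals and the tiny model are stand-ins for real seismic data and the production algorithms behind ShakeAlert.

```python
# Toy quake-vs-noise classifier on synthetic 4-second waveform windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
fs, n = 100, 400                          # 100 Hz sampling, 4 s windows

def synth(kind):
    t = np.arange(n) / fs
    if kind == "quake":                   # lower-frequency, growing amplitude
        return np.sin(2 * np.pi * 2 * t) * np.linspace(0, 1, n) + 0.1 * rng.normal(size=n)
    return np.sin(2 * np.pi * 15 * t) + 0.5 * rng.normal(size=n)   # traffic-like buzz

def features(w):
    spectrum = np.abs(np.fft.rfft(w))
    dominant_hz = np.fft.rfftfreq(n, 1 / fs)[spectrum.argmax()]
    return [np.abs(w).max(), dominant_hz]  # peak amplitude + dominant frequency

labels = ["quake", "noise"] * 200
X = np.array([features(synth(k)) for k in labels])
y = np.array([k == "quake" for k in labels])

clf = RandomForestClassifier(n_estimators=30, random_state=0).fit(X, y)
print("flagged as quake:", bool(clf.predict([features(synth("quake"))])[0]))
```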

They’re working hard on an earthquake early warning system named ShakeAlert. The Washington Emergency Management Division is one of its key immediate users. If a big earthquake is coming in, then within a second or two ShakeAlert’s computer algorithms will identify it, determine what area it’s going to affect, produce a damage assessment, and create a warning.

Depending on how close you are to the source, the warning can be very short, from less than a second to as much as three minutes.

Agritech in the Netherlands

Robotics is already starting to transform farming.

The Netherlands is a very tiny country compared to the United States. Some people call it the Silicon Valley of agriculture.

It turns out this tiny European country is now the world’s second largest exporter of fresh food, and their secret is “vertical farming”.

Their greenhouses produce seven times more tomatoes per acre than a traditional farm. Efficiency in a greenhouse is determined by many different things. It starts with the roots: they don’t grow in soil anymore, they grow on an artificial substrate.

They give the plant exactly what it needs, not more, not less. But it is not just the roots; it’s also the above-ground environment: the humidity, the carbon dioxide concentration.

Sensors hidden among the plants generate a constant stream of data, including temperature, moisture, and soil nutrients. It’s all about engineering the climate and optimizing food production.

It’s very important to collect the data and see trends in it, so they can better organize climate control in those controlled environments.

They must look at it every day, and it is massive in terms of data; you cannot just look at an Excel file. That’s where artificial intelligence comes in: to make use of all that data to control the crop.
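The simplest way to picture closing the loop from sensor data to climate actions is a rule-based sketch like the one below. The thresholds and readings are invented; a real system would learn setpoints from historical crop data rather than hard-code them.

```python
# Minimal rule-based greenhouse climate check (invented thresholds and readings).
readings = {"temperature_c": 27.5, "humidity_pct": 58.0, "co2_ppm": 380.0}

TARGETS = {"temperature_c": (20.0, 26.0),   # (min, max) comfort band per variable
           "humidity_pct": (60.0, 80.0),
           "co2_ppm": (400.0, 800.0)}

actions = []
for variable, value in readings.items():
    low, high = TARGETS[variable]
    if value < low:
        actions.append(f"raise {variable} (now {value}, target >= {low})")
    elif value > high:
        actions.append(f"lower {variable} (now {value}, target <= {high})")

print(actions or ["all variables within range"])
```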

Artificial Photosynthesis

AI is what makes the magic happen, by helping them see what they otherwise could not: using ultraviolet and infrared light, they are training AI to measure photosynthesis in plants.

It’s about leveraging technology to bridge the gap between the raw data and the desired outcome.

Using special cameras with computer vision, they measure the light reflected by the leaves. This allows them to see how much energy a plant is generating at the molecular level. It is chlorophyll fluorescence: measuring how plants turn light into growth.

Some of the leading applications of computer vision are in areas like agriculture and manufacturing. One of the great opportunities is that while we can only see visible light, there are many other parts of the electromagnetic spectrum, like X-rays, infrared, and ultraviolet, and it’s possible to build sensors for those. This opens up a whole space of possibilities for machines to solve problems that we humans can’t.
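A simple illustration of working outside visible light is the Normalized Difference Vegetation Index (NDVI), a standard proxy for plant vigour computed from red and near-infrared reflectance. It is a cruder measure than the chlorophyll-fluorescence imaging described above, and the bands below are random data, but it shows the principle.

```python
# NDVI from (fake) red and near-infrared reflectance bands.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.3, size=(100, 100))   # fake red-band reflectance
nir = rng.uniform(0.3, 0.8, size=(100, 100))    # fake near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)         # healthy leaves -> values near 1

print(f"mean NDVI: {ndvi.mean():.2f}  (higher generally means more active foliage)")
```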

They can see all the individual leaves and the efficiency of their photosynthesis. This works very well as long as the plant can be captured by a camera, which is only possible when the plants are not too big.

Digital Laoban