Autonomous cars are the next big thing. Well, it’s actually the current-to-next big thing, but we digress. The point is that people are apparently bored of driving cars and want systems and programs to drive them around everywhere so that they can sit in the back seat and pretend to be Donald Trump.
The only problem with self-driving cars is that they struggle to function properly in the real world; you simply can’t replace human instinct and reaction when things on the road get challenging. Why? Because cars can’t see in real time the way we do, which makes it very difficult to make split-second decisions when that double-parked car pulls out suddenly.
Apparently, now they can.
Thanks to SegNet, a new system created by the University of Cambridge, your dreams of being driven around by software might just come to fruition. SegNet can “read” the road in front of it, allowing it to differentiate various elements like the sky, other cars, road markings and even people.
SegNet “sees” the road in an RGB image and then classifies different layers and objects using a Bayesian analysis of the scene. According to the release, the system can “take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories in real time”. It can also apparently deal with light, shadow and night environments and currently labels more than 90% of pixels correctly.
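To make the idea concrete, here is a toy sketch (not SegNet's actual code, and the shapes and pass count are made up for illustration) of what "classifying every pixel into one of 12 categories with a Bayesian treatment" can look like: several stochastic forward passes are averaged into per-pixel class probabilities, and the spread between passes gives a per-pixel uncertainty map.

```python
import numpy as np

# Toy illustration only: classify each pixel of an H x W image into one of
# 12 classes, and estimate per-pixel uncertainty by averaging several
# stochastic forward passes (the Monte Carlo idea behind Bayesian SegNet).
rng = np.random.default_rng(0)
H, W, NUM_CLASSES, PASSES = 4, 6, 12, 20  # hypothetical sizes

def stochastic_forward(image_shape):
    """Stand-in for one dropout-perturbed forward pass of the network:
    returns per-pixel class probabilities (softmax over 12 classes)."""
    logits = rng.normal(size=(*image_shape, NUM_CLASSES))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Average the passes to get mean probabilities per pixel...
mean_probs = np.mean([stochastic_forward((H, W)) for _ in range(PASSES)], axis=0)
labels = mean_probs.argmax(axis=-1)  # hard label per pixel (0..11)
# ...and use predictive entropy as a per-pixel uncertainty map: high
# entropy marks pixels the model is unsure about (e.g. shadows at night).
uncertainty = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)

print(labels.shape, uncertainty.shape)  # (4, 6) (4, 6)
```

In the real system a trained encoder-decoder network produces the per-pixel probabilities; the random logits here merely stand in so the averaging and uncertainty step can be shown end to end.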
Unlike laser- and radar-guided systems, which require expensive sensors that can cost more than the car itself, SegNet works from ordinary camera images. The system was “trained” by a group of Cambridge undergraduates who manually labelled every pixel in over 5,000 images; the researchers then trained it for over two days, after which, as one of the students notes, it was surprisingly good at recognising things in an image.
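The headline figure quoted above, "more than 90% of pixels labelled correctly", is simple to compute once you have those hand-labelled images: compare the predicted label at every pixel against the human-labelled ground truth. A minimal sketch, with a made-up 4×6 mask:

```python
import numpy as np

# Hypothetical illustration of per-pixel accuracy against a manually
# labelled ground-truth mask (the kind the undergraduates produced).
rng = np.random.default_rng(1)
truth = rng.integers(0, 12, size=(4, 6))  # human-labelled class per pixel
pred = truth.copy()
pred[0, 0] = (pred[0, 0] + 1) % 12        # flip one pixel to a wrong class
accuracy = (pred == truth).mean()         # fraction of pixels correct
print(f"{accuracy:.1%}")  # 95.8%
```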
From our testing, we tend to agree. It does a remarkable job of differentiating between the 12 elements, though there is quite a lot of polishing left to do, especially at night. Since the system is “trained”, though, it can only get better over time as it sees more images and learns to recognise more elements.
One of the biggest problems with driving around in Malaysia is that the lines are never where they should be. Add to that the fact that people double park like there is no tomorrow and that the roads have potholes the size of Canada, and even human beings have trouble navigating our treacherous streets, let alone a piece of software.
With this though, there is the potential to train the system to recognise these little nuances that make up our little snowflake of a country. Given the right circumstances, the possibilities are quite endless. Although the researchers say it is still far from ready for autonomous cars, we could possibly see it in a domestic robot – like a robot vacuum cleaner – in the short term.
From where we stand, trust in an autonomous car seems to be the biggest hurdle for people to overcome. Would you relinquish control of your car on a highway to a computer program developed by some bloke in a laboratory?
If you want to test out this system for yourself, head on over to their website and you can upload a picture of your own to be scanned by SegNet, or you can browse through the sample images they’ve uploaded.