Autonomous cars are the next big thing. Well, it’s actually the current-to-next big thing, but we digress. The point is that people are apparently bored of driving cars and want systems and programs to drive them around everywhere so that they can sit in the back seat and pretend to be Donald Trump.
The only problem with self-driving cars is that they struggle to function properly in the real world: you simply can’t replace human instinct and reaction when things on the road get challenging. Why? Because cars can’t see in real time the way we do, which makes it very difficult to make split-second decisions when that double-parked car pulls out suddenly.
Apparently, now they can.
Thanks to SegNet, a new system created by the University of Cambridge, your dreams of being driven around by software might just come to fruition. SegNet can “read” the road in front of it, allowing it to differentiate various elements like the sky, other cars, road markings and even people.
SegNet “sees” the road as an RGB image and then classifies the different layers and objects in it using a Bayesian analysis of the scene. According to the release, the system can “take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories in real time”. It can also apparently cope with varying light, shadow and night-time environments, and it currently labels more than 90% of pixels correctly.
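To make the idea concrete, here’s a tiny sketch of what “sorting objects into 12 categories” means at the pixel level: each pixel gets a score per class, the predicted label is the class with the highest score, and accuracy is the fraction of pixels labelled correctly. This is a conceptual illustration in plain NumPy, not the actual SegNet architecture, and the class names are our own assumption rather than the official list.

```python
import numpy as np

# Illustrative class list (an assumption, not Cambridge's official 12 labels).
CLASSES = ["sky", "building", "pole", "road", "pavement", "tree",
           "sign", "fence", "car", "pedestrian", "cyclist", "unlabelled"]

def segment(scores: np.ndarray) -> np.ndarray:
    """scores: (H, W, 12) per-pixel class scores -> (H, W) label indices.
    Each pixel is assigned the class with the highest score."""
    return scores.argmax(axis=-1)

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels labelled correctly (the release quotes over 90%)."""
    return float((pred == truth).mean())

# A toy 2x2 "image": each pixel's scores peak at a different class.
scores = np.zeros((2, 2, len(CLASSES)))
scores[0, 0, 0] = 1.0   # top-left pixel looks like sky
scores[0, 1, 3] = 1.0   # top-right pixel looks like road
scores[1, 0, 8] = 1.0   # bottom-left pixel looks like a car
scores[1, 1, 9] = 1.0   # bottom-right pixel looks like a pedestrian

labels = segment(scores)
print([[CLASSES[i] for i in row] for row in labels])
# → [['sky', 'road'], ['car', 'pedestrian']]
```

The real system produces these per-pixel scores from a deep encoder–decoder network rather than hand-set values, but the final labelling step works on the same principle.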
Unlike laser- or radar-guided systems that require expensive sensors which can cost more than the car itself, SegNet was “trained” on images that a group of Cambridge undergraduates manually labelled, pixel by pixel, across more than 5,000 images. Once that was complete, the researchers trained the system for about two days, after which, one of the students notes, it was surprisingly good at recognising things in an image.
From our testing, we tend to agree. It does a remarkable job of differentiating between the 12 elements, though there is quite a lot of polishing to do, especially at night. As this system is “trained”, though, it can only get better over time as it sees more images and learns to recognise more elements.
One of the biggest problems with driving around in Malaysia is that the lines are never where they should be. Add to that the fact that people double park like there is no tomorrow and that the roads have potholes the size of Canada, and even human beings have trouble navigating our treacherous streets, let alone a piece of software.
With this, though, there is the potential to train the system to recognise the little nuances that make up our little snowflake of a country. Given the right circumstances, the possibilities are quite endless. Although the researchers say the system is still far from ready for autonomous cars, we could possibly see it in a domestic robot – like a robot vacuum cleaner – in the short term.
From where we stand, trust in an autonomous car seems to be the biggest hurdle for people to overcome. Would you relinquish control of your car on a highway to a computer program developed by some bloke in a laboratory?
If you want to test out this system for yourself, head on over to their website and you can upload a picture of your own to be scanned by SegNet, or you can browse through the sample images they’ve uploaded.