MIT and Toyota have released DriveSeg, an innovative new dataset that can accelerate autonomous driving research. It is free for the research and academic community and comes in two parts. DriveSeg includes nearly three minutes of video taken during a daytime trip around the streets of Cambridge, Massachusetts, and consists of 67 ten-second video clips (20,100 video frames) drawn from MIT Advanced Vehicle Technology Consortium data. In this article, we discuss self-driving cars and DriveSeg in more detail.
The present condition of self-driving cars
Training a machine to drive itself is not an easy task. However, this technology has already been introduced and is still in the developing phase. Support vector machines (SVMs) with histograms of oriented gradients (HOG), principal component analysis, the Bayes decision rule, and k-nearest neighbors are among the most widely used algorithms for self-driving cars. The latest news is that MIT and Toyota have released an innovative new dataset, DriveSeg, that can accelerate autonomous driving research. Some self-driving cars already introduced include:
- 2019 Toyota RAV4.
- 2019 Nissan Leaf.
- 2019 Tesla Model 3.
- 2020 Volvo XC60.
- 2019 BMW 5 Series.
- 2019 Cadillac CT6.
- 2020 Lexus LS.
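As a concrete illustration of the HOG + SVM combination mentioned above, here is a minimal sketch using scikit-image and scikit-learn. The synthetic images, feature parameters, and two-class setup are assumptions made for illustration only; a real perception pipeline would train on labeled road-scene crops (e.g. pedestrian vs. background):

```python
# Minimal HOG + linear SVM sketch (illustrative, not a production pipeline).
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_hog(image):
    # Histogram of oriented gradients: local edge-direction statistics.
    return hog(image, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1))

# Synthetic 64x64 grayscale "frames": class 0 is pure noise, class 1 adds a
# bright vertical bar (a crude stand-in for a pedestrian silhouette).
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        img = rng.random((64, 64))
        if label == 1:
            img[:, 28:36] += 2.0  # strong vertical structure
        X.append(extract_hog(img))
        y.append(label)

clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

The same two-stage idea (hand-crafted gradient features, then a classifier) was a standard object-detection recipe before end-to-end deep segmentation models became common.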
What is DriveSeg?
With the release of this dataset, MIT and Toyota are now working to advance research in autonomous driving systems. DriveSeg contains precise, pixel-level representations of many common road objects. Viewed through the lens of a continuous video driving scene, this full-scene segmentation can be particularly helpful for identifying more amorphous objects that do not always have defined, uniform shapes. This is a great contribution by MIT and Toyota to the development of self-driving cars.
A self-driving car is a vehicle that can sense its environment and move safely with little or no human input. Training such a system requires data, and DriveSeg is a new dataset that makes an excellent contribution to this field. It is the result of a collaboration between the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC), which jointly produced the dataset.
Its developers say DriveSeg's full-scene segmentation is especially helpful for identifying amorphous objects, which do not have uniform, defined shapes, so it can offer greater accuracy and ease in training. The main application of DriveSeg is that it allows researchers to explore the value of temporal dynamics information for full-scene segmentation in dynamic, real-world operating environments.
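One hedged sketch of how temporal information can help: smoothing noisy per-frame segmentation labels with a per-pixel majority vote over a short window of consecutive frames. The arrays below are synthetic stand-ins for per-frame predictions, not actual DriveSeg data, and the voting scheme is just one simple option among many:

```python
# Per-pixel temporal majority vote over consecutive frames (illustrative).
import numpy as np

def temporal_majority(preds, window=3):
    """preds: (T, H, W) int array of per-frame class labels.
    Returns labels smoothed by a majority vote over `window` frames."""
    T, _, _ = preds.shape
    out = np.empty_like(preds)
    half = window // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        chunk = preds[lo:hi]  # (w, H, W) slice of neighboring frames
        # Per-pixel vote: most frequent class along the time axis.
        out[t] = np.apply_along_axis(
            lambda v: np.bincount(v).argmax(), 0, chunk)
    return out

# Toy example: a stable "road" region (class 2) with one flickering frame.
preds = np.full((5, 4, 4), 2, dtype=int)
preds[2, 1, 1] = 7            # single-frame misclassification
smoothed = temporal_majority(preds)
print(smoothed[2, 1, 1])      # the flicker is voted away
```

In a continuous video dataset like DriveSeg, this kind of temporal consistency is exactly what single-frame benchmarks cannot measure.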
In simple words, DriveSeg is an open-source dataset contributed by MIT and Toyota to the field of self-driving cars, and an innovative achievement in their development. The video dataset helps train machines to handle many amorphous objects with better accuracy, and because it is open source, others working on similar projects can also benefit from it.
DriveSeg technical summary
- Video data: sixty-seven 10-second 720p (1280×720) 30 fps clips (20,100 frames)
- Class definitions (12): vehicle, pedestrian, road, sidewalk, bicycle, motorcycle, building, terrain (horizontal vegetation), vegetation (vertical vegetation), pole, traffic light, and traffic sign
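The 12-class schema above lends itself to simple per-class statistics. The sketch below computes the fraction of pixels each class occupies in a segmentation mask; note that the integer IDs assigned here are illustrative assumptions, since the dataset's own documentation defines the actual label encoding:

```python
# Per-class pixel fractions for a segmentation mask (IDs are assumed, not
# DriveSeg's official encoding).
import numpy as np

CLASSES = ["vehicle", "pedestrian", "road", "sidewalk", "bicycle",
           "motorcycle", "building", "terrain", "vegetation", "pole",
           "traffic light", "traffic sign"]

def class_pixel_fractions(mask):
    """mask: (H, W) int array of class IDs in 0..11.
    Returns {class_name: fraction of pixels} for classes present."""
    counts = np.bincount(mask.ravel(), minlength=len(CLASSES))
    return {CLASSES[i]: c / mask.size
            for i, c in enumerate(counts) if c}

# Toy 720p-shaped mask: mostly "road" (ID 2) with one "vehicle" (ID 0) patch.
mask = np.full((720, 1280), 2, dtype=int)
mask[300:400, 500:700] = 0
fracs = class_pixel_fractions(mask)
print(fracs["vehicle"])  # (100 * 200) / (720 * 1280) of the frame
```

Statistics like these are a common first sanity check on a new segmentation dataset, since heavily imbalanced classes (road vs. traffic sign, say) affect how models are trained and evaluated.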
The dataset's creators say, “By sharing this dataset, we hope to accelerate research into autonomous driving systems and advanced safety features that are more attuned to the complexity of the environment around them.”
Looking back on the journey from 1920 to the present, this achievement in the field of self-driving cars is remarkable, and the future of self-driving cars looks promising.
You can get more technical specifications of this dataset here.