User contributions for Mr. MacKenty


13 February 2024

5 February 2024

31 January 2024

30 January 2024

22 January 2024

15 November 2023

17 October 2023

10 October 2023

2 September 2023

31 May 2023

30 May 2023

  • 12:32, 30 May 2023 (+1,728) N Tracking: Created page with "In the context of robotics and particularly in systems like Simultaneous Localization and Mapping (SLAM) or Visual SLAM (vSLAM), "tracking" typically refers to the process of continuously estimating the robot's motion and position over time based on its sensor data. Here's how tracking might work in a vSLAM system: # The robot captures a sequence of images with its camera as it moves through the environment. # For each new image, the robot identifies features (distinct..." (current)
  • 12:31, 30 May 2023 (+16) Robotics: No edit summary (current)
  • 12:12, 30 May 2023 (+1,894) N Loop closure: Created page with "Loop closure is an important concept in the field of robotics, particularly in relation to the Simultaneous Localization and Mapping (SLAM) problem. As a robot moves through an environment, it builds a map of the environment and uses that map to estimate its location within it. However, as the robot moves, small errors in its motion estimates can accumulate over time, leading to drift in the estimated trajectory and the map. The idea of loop closure is to correct thi..." (current)
  • 12:11, 30 May 2023 (+1,693) N Local mapping: Created page with "Local mapping is a concept in robotics, particularly in relation to Simultaneous Localization and Mapping (SLAM) and Visual SLAM (vSLAM), where the robot builds a smaller, more immediate map of its surroundings, often referred to as a local map. The idea is to focus computational resources on understanding the robot's immediate surroundings in detail, rather than attempting to map the entire environment at once. This local map is continuously updated as the robot moves..." (current)
  • 12:09, 30 May 2023 (+1,883) N Initialization: Created page with "In the context of robotics and especially in algorithms like Simultaneous Localization and Mapping (SLAM) or Visual SLAM (vSLAM), "Initialization" refers to the process of setting up the initial conditions or starting point for the algorithm. At the start of SLAM or vSLAM, the robot doesn't know anything about its environment or its position within that environment. However, to begin the process of mapping and localization, it needs some kind of initial guess or estimat..." (current)
  • 12:06, 30 May 2023 (+482) N Visual simultaneous localization and mapping (vSLAM) modules: Created page with "Visual Simultaneous Localization and Mapping, or vSLAM, is a variant of the general SLAM problem where the primary sensor data comes from a camera or multiple cameras. This technique uses visual information to create a map of the environment while also keeping track of the robot's location within the map. The "modules" in a vSLAM system might refer to the individual components or stages of the vSLAM process. The exact modules can vary depending on the specific vSLAM alg..." (current)
  • 12:05, 30 May 2023 (+2,157) N Sensor fusion model: Created page with "Sensor fusion is a method used in robotics and automation that involves merging data from different sensors to improve the understanding of the environment. This process can reduce uncertainty, improve accuracy, and make the system more robust to failures of individual sensors. A sensor fusion model, then, is a mathematical and computational model that describes how to combine the data from different sensors. Here's an example to illustrate the concept: Imagine you ha..." (current)
  • 12:03, 30 May 2023 (+1,723) N Simultaneous localization and mapping (SLAM): Created page with "Simultaneous Localization and Mapping, or SLAM, is a computational problem in the field of robotics. As the name implies, it's about doing two things at the same time: # '''Localization''': Determining where a robot is located in an environment. # '''Mapping''': Building a map of that environment. What makes SLAM challenging is that it's a chicken-and-egg problem: to know where you are (localization), you need a map, but to create a map, you need to know where you are...." (current)
  • 11:59, 30 May 2023 (+1,972) N Robot drift: Created page with ""Robot drift" is a term often used in the context of robotics and refers to the accumulated error in a robot's estimated position and orientation over time. This error, or "drift", can occur when a robot is using sensors like wheel encoders or Inertial Measurement Units (IMUs) to estimate its motion. Both these methods involve integrating sensor measurements over time to calculate position, but small errors in these measurements can accumulate, leading to larger and lar..." (current)
  • 11:58, 30 May 2023 (+1,900) N Rigid pose estimation (RPE): Created page with "Rigid Pose Estimation (RPE) is a concept in computer vision and robotics that involves determining the position and orientation (the "pose") of an object that does not deform or change shape — in other words, a "rigid" object. The term 'rigid' indicates that the distance between any two points on the object remains constant over time, regardless of the object's movement or orientation. In the context of robotics, pose estimation often refers to estimating the pose of..." (current)
  • 11:56, 30 May 2023 (+1,386) N Relocalization: Created page with "Relocalization is a critical concept in robotics, specifically in the context of autonomous navigation and Simultaneous Localization and Mapping (SLAM). It refers to the ability of a robot to determine its current location in a map that it previously built or in a known environment, particularly after it has lost track of its position due to an error, disturbance, or after it has been manually moved (also known as the "kidnapped robot" problem). There are many reasons w..." (current)
  • 11:54, 30 May 2023 (+1,923) N Odometry sensor: Created page with "An odometry sensor is a device used to estimate the change in position over time of a vehicle, like a car or a robot, based on data from its own sensors. The term "odometry" comes from the combination of "hodos", meaning path or way in Greek, and "metron", meaning measure. Therefore, it's all about measuring the path a vehicle takes. There are several types of odometry sensors, and they work in different ways: # '''Wheel Encoders''': In many robots, especially wheeled..." (current)
  • 11:53, 30 May 2023 (+1,331) N Object occlusion: Created page with "Object occlusion in the context of computer vision refers to the event where a part or all of an object in the scene is hidden from view by some other object in the scene. In simple words, when an object is in front of another object, blocking it from view, we say that the second object is occluded. For example, imagine you are looking at a photograph of a crowd of people. Some of those people may be standing in front of others, preventing you from seeing the people beh..." (current)
  • 11:51, 30 May 2023 (+1,606) N Light detection and ranging (LIDAR): Created page with "Light Detection and Ranging, more commonly known as LiDAR, is a method of remote sensing that uses light in the form of a pulsed laser to measure distances to an object. These light pulses, combined with other data recorded by the airborne system, generate precise, three-dimensional information about the shape of the Earth and its surface characteristics. Here's a simplified explanation of how it works: # A LiDAR system sends out a pulse of light, usually in the form o..." (current)
  • 11:50, 30 May 2023 (+1,581) N Key points/pairs: Created page with "Key points, also known as feature points or interest points, are distinct and unique points in an image that are easy to find and accurately describe. These points are usually selected because they represent corners, edges, or other interesting aspects of the image, and they are used in many computer vision tasks for things like object recognition, image alignment, and 3D reconstruction. When multiple images are used (for example, in a video or a sequence of frames take..." (current)
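The "Tracking" entry above describes estimating the robot's motion from features matched between consecutive camera frames. A minimal sketch of that idea, assuming pure image-plane translation and a hypothetical `estimate_translation` helper with made-up point data (real trackers such as ORB-SLAM solve for a full 6-DoF pose):

```python
# Toy vSLAM-style tracking step: estimate inter-frame camera translation
# as the mean displacement of matched key points. All data is invented.

def estimate_translation(pts_prev, pts_curr):
    """Mean 2D displacement between matched feature points."""
    n = len(pts_prev)
    dx = sum(c[0] - p[0] for p, c in zip(pts_prev, pts_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(pts_prev, pts_curr)) / n
    return dx, dy

prev_pts = [(10, 20), (30, 40), (50, 60)]
curr_pts = [(12, 21), (32, 41), (52, 61)]  # every feature shifted by (2, 1)
print(estimate_translation(prev_pts, curr_pts))  # (2.0, 1.0)
```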
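The "Loop closure" entry describes correcting accumulated drift once the robot recognizes a previously visited place. A minimal sketch of one naive correction, linearly distributing the end-pose error back along the trajectory (graph-based SLAM does this with a full pose-graph optimization; `close_loop` and the trajectory are invented for illustration):

```python
def close_loop(trajectory, true_start):
    """Spread the end-pose error linearly over the trajectory.
    trajectory: list of (x, y) estimates; the last pose should
    coincide with true_start but has drifted away from it."""
    n = len(trajectory) - 1
    err_x = trajectory[-1][0] - true_start[0]
    err_y = trajectory[-1][1] - true_start[1]
    return [(x - err_x * i / n, y - err_y * i / n)
            for i, (x, y) in enumerate(trajectory)]

# Robot drove a loop back to (0, 0), but drift left its estimate at (0.4, -0.2).
drifted = [(0, 0), (1, 0.1), (1.1, 1), (0.2, 1.1), (0.4, -0.2)]
corrected = close_loop(drifted, (0, 0))
print(corrected[-1])  # end pose snapped back to (0.0, 0.0)
```

Earlier poses are moved proportionally less, reflecting that they were estimated before most of the drift accumulated.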
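The "Sensor fusion model" entry describes merging data from different sensors to reduce uncertainty. The simplest concrete model is inverse-variance weighting of two independent estimates (the 1-D Kalman update in disguise); the sensor values below are invented:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates.
    The fused variance is smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# GPS says 10.0 m (variance 4.0); wheel odometry says 12.0 m (variance 1.0).
pos, var = fuse(10.0, 4.0, 12.0, 1.0)
print(pos, var)  # closer to the more certain sensor, with lower variance
```

Note the fused estimate (11.6) sits nearer the low-variance odometry reading, and the fused variance (0.8) is below both inputs, which is exactly the uncertainty reduction the entry mentions.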
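The "Robot drift" entry explains that integrating sensor measurements lets small per-step errors accumulate. A deterministic toy showing the effect, with an invented `integrate` helper and a fixed per-step bias standing in for sensor noise:

```python
def integrate(steps, bias):
    """Dead-reckoned position from per-step measurements,
    each corrupted by the same small bias."""
    pos = 0.0
    for s in steps:
        pos += s + bias
    return pos

true_steps = [0.01] * 1000            # robot really moves 10 m in 1 cm steps
est = integrate(true_steps, 0.0005)   # 0.5 mm of error per step
drift = est - sum(true_steps)
print(round(drift, 2))                # 0.5 m of drift after 1000 steps
```

A tiny half-millimeter error per step grows into half a meter over the run, which is why corrections such as loop closure or relocalization are needed.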
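The "Odometry sensor" entry lists wheel encoders as one source of odometry. A minimal differential-drive update from encoder ticks, assuming invented robot parameters (tick resolution, wheel radius, wheel base) and a hypothetical `diff_drive_step` helper:

```python
import math

def diff_drive_step(x, y, theta, ticks_l, ticks_r,
                    ticks_per_rev=360, wheel_radius=0.05, wheel_base=0.3):
    """One odometry update from left/right wheel-encoder ticks.
    Parameters are illustrative, not from any particular robot."""
    dist_per_tick = 2 * math.pi * wheel_radius / ticks_per_rev
    d_l = ticks_l * dist_per_tick          # distance rolled by left wheel
    d_r = ticks_r * dist_per_tick          # distance rolled by right wheel
    d = (d_l + d_r) / 2                    # distance moved by robot center
    d_theta = (d_r - d_l) / wheel_base     # change in heading
    x += d * math.cos(theta + d_theta / 2)
    y += d * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

# Equal tick counts on both wheels: straight-line motion, heading unchanged.
x, y, theta = diff_drive_step(0.0, 0.0, 0.0, 360, 360)
print(round(x, 4), y, theta)  # one wheel revolution of forward travel
```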
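The LiDAR entry describes measuring distance from a pulsed laser. The core arithmetic is time of flight: the pulse travels to the target and back, so the range is the speed of light times the round-trip time, divided by two. A sketch with an invented `lidar_range` helper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_seconds):
    """Range from pulse time of flight; divide by 2 because the
    measured time covers the out-and-back path."""
    return C * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds hit something about 30 m away.
print(lidar_range(200e-9))  # ~29.98 m
```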