All public logs

Combined display of all available logs of Computer Science Wiki. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).

Logs
  • 14:07, 29 January 2024 Mr. MacKenty talk contribs deleted page User:Smurpani (content was: "Can't seem to login with the account I made previously... Thanks, Sid", and the only contributor was "Mr. MacKenty" (talk))
  • 14:07, 29 January 2024 Mr. MacKenty talk contribs deleted page User:18barlow d (content was: "I'm pretty cool. I enjoy reading and eating lots of food. Mexican food is some of my favorite. I also love cookies.", and the only contributor was "Mr. MacKenty" (talk))
  • 14:25, 31 May 2023 Mr. MacKenty talk contribs uploaded File:Parts of AI.png
  • 14:25, 31 May 2023 Mr. MacKenty talk contribs created page File:Parts of AI.png
  • 12:32, 30 May 2023 Mr. MacKenty talk contribs created page Tracking (Created page with "In the context of robotics and particularly in systems like Simultaneous Localization and Mapping (SLAM) or Visual SLAM (vSLAM), "tracking" typically refers to the process of continuously estimating the robot's motion and position over time based on its sensor data. Here's how tracking might work in a vSLAM system: # The robot captures a sequence of images with its camera as it moves through the environment. # For each new image, the robot identifies features (distinct...")
  • 12:12, 30 May 2023 Mr. MacKenty talk contribs created page Loop closure (Created page with "Loop closure is an important concept in the field of robotics, particularly in relation to the Simultaneous Localization and Mapping (SLAM) problem. As a robot moves through an environment, it builds a map of the environment and uses that map to estimate its location within it. However, as the robot moves, small errors in its motion estimates can accumulate over time, leading to drift in the estimated trajectory and the map. The idea of loop closure is to correct thi...")
  • 12:11, 30 May 2023 Mr. MacKenty talk contribs created page Local mapping (Created page with "Local mapping is a concept in robotics, particularly in relation to Simultaneous Localization and Mapping (SLAM) and Visual SLAM (vSLAM), where the robot builds a smaller, more immediate map of its surroundings, often referred to as a local map. The idea is to focus computational resources on understanding the robot's immediate surroundings in detail, rather than attempting to map the entire environment at once. This local map is continuously updated as the robot moves...")
  • 12:09, 30 May 2023 Mr. MacKenty talk contribs created page Initialization (Created page with "In the context of robotics and especially in algorithms like Simultaneous Localization and Mapping (SLAM) or Visual SLAM (vSLAM), "Initialization" refers to the process of setting up the initial conditions or starting point for the algorithm. At the start of SLAM or vSLAM, the robot doesn't know anything about its environment or its position within that environment. However, to begin the process of mapping and localization, it needs some kind of initial guess or estimat...")
  • 12:06, 30 May 2023 Mr. MacKenty talk contribs created page Visual simultaneous localization and mapping (vSLAM) modules (Created page with "Visual Simultaneous Localization and Mapping, or vSLAM, is a variant of the general SLAM problem where the primary sensor data comes from a camera or multiple cameras. This technique uses visual information to create a map of the environment while also keeping track of the robot's location within the map. The "modules" in a vSLAM system might refer to the individual components or stages of the vSLAM process. The exact modules can vary depending on the specific vSLAM alg...")
  • 12:05, 30 May 2023 Mr. MacKenty talk contribs created page Sensor fusion model (Created page with "Sensor fusion is a method used in robotics and automation that involves merging data from different sensors to improve the understanding of the environment. This process can reduce uncertainty, improve accuracy, and make the system more robust to failures of individual sensors. A sensor fusion model, then, is a mathematical and computational model that describes how to combine the data from different sensors. Here's an example to illustrate the concept: Imagine you ha...")
  • 12:03, 30 May 2023 Mr. MacKenty talk contribs created page Simultaneous localization and mapping (SLAM) (Created page with "Simultaneous Localization and Mapping, or SLAM, is a computational problem in the field of robotics. As the name implies, it's about doing two things at the same time: # '''Localization''': Determining where a robot is located in an environment. # '''Mapping''': Building a map of that environment. What makes SLAM challenging is that it's a chicken-and-egg problem: to know where you are (localization), you need a map, but to create a map, you need to know where you are....")
  • 11:59, 30 May 2023 Mr. MacKenty talk contribs created page Robot drift (Created page with ""Robot drift" is a term often used in the context of robotics and refers to the accumulated error in a robot's estimated position and orientation over time. This error, or "drift", can occur when a robot is using sensors like wheel encoders or Inertial Measurement Units (IMUs) to estimate its motion. Both these methods involve integrating sensor measurements over time to calculate position, but small errors in these measurements can accumulate, leading to larger and lar...")
  • 11:58, 30 May 2023 Mr. MacKenty talk contribs created page Rigid pose estimation (RPE) (Created page with "Rigid Pose Estimation (RPE) is a concept in computer vision and robotics that involves determining the position and orientation (the "pose") of an object that does not deform or change shape — in other words, a "rigid" object. The term 'rigid' indicates that the distance between any two points on the object remains constant over time, regardless of the object's movement or orientation. In the context of robotics, pose estimation often refers to estimating the pose of...")
  • 11:56, 30 May 2023 Mr. MacKenty talk contribs created page Relocalization (Created page with "Relocalization is a critical concept in robotics, specifically in the context of autonomous navigation and Simultaneous Localization and Mapping (SLAM). It refers to the ability of a robot to determine its current location in a map that it previously built or in a known environment, particularly after it has lost track of its position due to an error, disturbance, or after it has been manually moved (also known as the "kidnapped robot" problem). There are many reasons w...")
  • 11:54, 30 May 2023 Mr. MacKenty talk contribs created page Odometry sensor (Created page with "An odometry sensor is a device used to estimate the change in position over time of a vehicle, like a car or a robot, based on data from its own sensors. The term "odometry" comes from the combination of "hodos", meaning path or way in Greek, and "metron", meaning measure. Therefore, it's all about measuring the path a vehicle takes. There are several types of odometry sensors, and they work in different ways: # '''Wheel Encoders''': In many robots, especially wheeled...")
  • 11:53, 30 May 2023 Mr. MacKenty talk contribs created page Object occlusion (Created page with "Object occlusion in the context of computer vision refers to the event where a part or all of an object in the scene is hidden from view by some other object in the scene. In simple words, when an object is in front of another object, blocking it from view, we say that the second object is occluded. For example, imagine you are looking at a photograph of a crowd of people. Some of those people may be standing in front of others, preventing you from seeing the people beh...")
  • 11:51, 30 May 2023 Mr. MacKenty talk contribs created page Light detection and ranging (LIDAR) (Created page with "Light Detection and Ranging, more commonly known as LiDAR, is a method of remote sensing that uses light in the form of a pulsed laser to measure distances to an object. These light pulses, combined with other data recorded by the airborne system, generate precise, three-dimensional information about the shape of the Earth and its surface characteristics. Here's a simplified explanation of how it works: # A LiDAR system sends out a pulse of light, usually in the form o...")
  • 11:50, 30 May 2023 Mr. MacKenty talk contribs created page Key points/pairs (Created page with "Key points, also known as feature points or interest points, are distinct and unique points in an image that are easy to find and accurately describe. These points are usually selected because they represent corners, edges, or other interesting aspects of the image, and they are used in many computer vision tasks for things like object recognition, image alignment, and 3D reconstruction. When multiple images are used (for example, in a video or a sequence of frames take...")
  • 11:48, 30 May 2023 Mr. MacKenty talk contribs created page Keyframe selection (Created page with "Keyframe selection is a term commonly used in the field of computer vision, especially in video processing and robotics. A keyframe is a frame in a sequence of frames (like a video or a series of images) that contains important or critical data. In video compression, for example, keyframes (also known as I-frames) are the frames from which subsequent frames are based. These keyframes serve as reference points and the frames in between are often compressed by storing onl...")
  • 11:46, 30 May 2023 Mr. MacKenty talk contribs created page Inertial measurement unit (IMU) (Created page with "An Inertial Measurement Unit, or IMU, is a device that measures and reports on a vehicle's velocity, orientation, and gravitational forces, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. IMUs are typically used to aid in navigation and tracking systems, particularly when GPS data is unavailable or unreliable. Let's break down the components of an IMU: # '''Accelerometers:''' These measure linear acceleration. However, they can't disting...")
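The Robot drift and Odometry sensor pages created above both describe how integrating wheel-encoder measurements over time yields a position estimate whose errors accumulate. A minimal sketch of that dead-reckoning integration for a differential-drive robot (the axle width, wheel radius, and tick resolution are illustrative values, not from any page above):

```python
import math

def dead_reckon(ticks, ticks_per_rev=360, wheel_radius=0.05):
    """Integrate (left, right) wheel-encoder ticks into an (x, y, heading) pose.

    Differential-drive model: each step advances the robot by the mean
    wheel travel and rotates it by the travel difference over the axle
    width. Any per-step measurement error is integrated along with the
    signal, which is exactly why the estimate drifts over time.
    """
    axle = 0.2  # distance between the wheels in metres (illustrative)
    per_tick = 2 * math.pi * wheel_radius / ticks_per_rev
    x = y = theta = 0.0
    for left, right in ticks:
        dl, dr = left * per_tick, right * per_tick
        d = (dl + dr) / 2.0           # forward distance this step
        theta += (dr - dl) / axle     # heading change this step
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return x, y, theta
```

Because every step compounds the previous heading estimate, even a tiny systematic error in one wheel's tick count bends the whole estimated trajectory, which is the drift that loop closure and relocalization exist to correct.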
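The Sensor fusion model and IMU pages above describe combining data from different sensors to reduce uncertainty. One of the simplest fusion models is a complementary filter, sketched here for a single pitch angle (the blend factor and timestep are assumed values for illustration):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one angle estimate.

    The integrated gyro signal is smooth but drifts; the accelerometer
    angle is noisy but drift-free. Blending them, with alpha near 1 so
    the gyro dominates short-term and the accelerometer corrects
    long-term, is a minimal sensor-fusion model.
    """
    angle = accel_angles[0]
    for rate, acc in zip(gyro_rates, accel_angles):
        # Trust the gyro step, then nudge toward the accelerometer reading.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
    return angle
```

A Kalman filter generalizes this idea by weighting each sensor according to an explicit model of its noise, rather than a fixed blend factor.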
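The Keyframe selection page above notes that only some frames in a sequence carry data worth keeping. A minimal selection policy, assuming a translation-only criterion (real vSLAM systems also weigh rotation and tracked-feature overlap; the threshold is illustrative):

```python
import math

def select_keyframes(poses, min_translation=0.5):
    """Pick keyframe indices from a trajectory of (x, y) poses: keep a
    frame only when the robot has moved at least `min_translation`
    metres since the last kept keyframe."""
    if not poses:
        return []
    keys = [0]          # the first frame is always a keyframe
    last = poses[0]
    for i, pose in enumerate(poses[1:], start=1):
        if math.dist(pose, last) >= min_translation:
            keys.append(i)
            last = pose
    return keys
```

Spacing keyframes this way bounds the number of frames the local-mapping and loop-closure stages must process, which is the computational point made on the Local mapping page.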