Robot companion localization at home and in the office

Arnoud Visser, Jürgen Sturm, Frans Groen
Intelligent Autonomous Systems, Universiteit van Amsterdam
http://www.science.uva.nl/research/ias/

Abstract

The abilities of mobile robots depend greatly on the performance of basic skills such as vision and localization. Although great progress has been made to explore and map extensive public areas with large holonomic robots on wheels, less attention has been paid to the localization of a small robot companion in a confined environment such as a room in an office or at home. In this article, a localization algorithm for the popular Sony entertainment robot Aibo inside a room is worked out. This algorithm can provide localization information based on the natural appearance of the walls of the room. The algorithm starts by making a scan of the surroundings, turning the head and the body of the robot on a certain spot. The robot learns the appearance of the surroundings at that spot by storing color transitions at different angles in a panoramic index. The stored panoramic appearance is used to determine the orientation (including a confidence value) relative to the learned spot for other points in the room. When multiple spots are learned, an absolute position estimate can be made. The applicability of this kind of localization is demonstrated in two environments: at home and in an office.

1 Introduction

1.1 Context

Humans orientate easily in their natural environments. To be able to interact with humans, mobile robots also need to know where they are.
Robot localization is therefore an important basic skill of a mobile robot, such as a robot companion like the Aibo. Yet, the Sony entertainment software contained no localization software until the latest release^1. Still, many other applications for a robot companion - like collecting a newspaper from the front door - strongly depend on fast, accurate and robust position estimates. As long as the localization of a walking robot, like the Aibo, is based on odometry after sparse observations, no robust and accurate position estimates can be expected.

Most of the localization research with the Aibo has concentrated on the RoboCup. At the RoboCup^2, artificial landmarks such as colored flags, goals and field lines can be used to achieve localization accuracies below six centimeters [6, 8].

The price that these RoboCup approaches pay is their total dependency on artificial landmarks of known shape, position and color. Most algorithms even require manual calibration of the actual colors and lighting conditions used on a field, and still are quite susceptible to disturbances around the field, as for instance produced by brightly colored clothes in the audience.

The interest of the RoboCup community in more general solutions has been (and still is) growing over the past few years. The almost-SLAM challenge^3 of the 4-Legged league is a good example of the state of the art in this community. For this challenge, additional landmarks with bright colors are placed around the borders of a RoboCup field. The robots get one minute to walk around and explore the field. Then, the normal beacons and goals are covered up or removed, and the robot must move to a series of five points on the field, using the information learned during the first minute. The winner of this challenge [6] reached the five points by using mainly the information of the field lines. The additional landmarks were only used to break the symmetry on the soccer field.

A more ambitious challenge is formulated in the newly founded RoboCup @ Home league^4. In this challenge the robot has to safely navigate toward objects in the living room environment. The robot gets 5 minutes to learn the environment. After the learning phase, the robot has to visit 4 distinct places/objects in the scenario, at least 4 meters away from each other, within 5 minutes.

^1 Aibo Mind 3 remembers the direction of its station and toys relative to its current orientation.
^2 RoboCup Four Legged League homepage, last accessed in May 2006, http://www.tzi.de/4legged
^3 Details about the Simultaneous Localization and Mapping challenge can be found at http://www.tzi.de/4legged/pub/Website/Downloads/Challenges2005.pdf
1.2 Related Work

Many researchers have worked on the SLAM problem in general, for instance on panoramic images [1, 2, 4, 5]. These approaches are inspiring, but only partially transferable to the 4-Legged league. The Aibo is not equipped with an omni-directional high-quality camera. The camera in the nose has only a horizontal opening angle of 56.9 degrees and a resolution of 416 x 320 pixels. Further, the horizon in the images is not constant, but depends on the movements of the head and legs of the walking robot. So each image is taken from a slightly different perspective, and the path of the camera center is only in first approximation a circle. Further, the images are taken while the head is moving. When moving at full speed, this can give a difference of 5.4 degrees between the top and the bottom of the image, so the image appears tilted as a function of the turning speed of the head. Still, the location of the horizon can be calculated by solving the kinematic equations of the robot. To process the images, a 576 MHz processor is available in the Aibo, which means that only simple image processing algorithms are applicable. In practice, the image is analyzed by following scan-lines with a direction relative to the calculated horizon. In our approach, multiple sectors above the horizon are analyzed, with in each sector multiple scan-lines in the vertical direction. One of the general approaches [3] also divides the image into multiple sectors, but that image is omni-directional and each sector is analyzed on its average color. Our method analyzes each sector on a different characteristic feature: the frequency of color transitions.
2 Approach

The main idea is quite intuitive: we would like the robot to generate and store a 360° circular panorama image of its environment while it is in the learning phase. After that, it should align each new image with the stored panorama, and from that the robot should be able to derive its relative orientation (in the localization phase). This alignment is not trivial because the new image can be translated, rotated, stretched and perspectively distorted when the robot does not stand at the point where the panorama was originally learned [11].

Of course, the Aibo is not able (at least not in real-time) to compute this alignment on full-resolution images. Therefore a reduced feature space is designed so that the computations become tractable^5 on an Aibo. So, a reduced circular 360° panorama model of the environment is learned. Figure 1 gives a quick overview of the algorithm's main components.

The Aibo performs a calibration phase before the actual learning can start. In this phase the Aibo first decides on a suitable camera setting (i.e. camera gain and the shutter setting) based on the dynamic range of brightness in the autoshutter step. Then it collects color pixels by turning its head for a while and finally clusters these into the 10 most important color classes in the color clustering step, using a standard implementation of the Expectation-Maximization algorithm assuming a Gaussian mixture model [9]. The result of the calibration phase is an automatically generated lookup table that maps every YCbCr color onto one of the 10 color classes and can therefore be used to segment incoming images into their characteristic color patches (see figure 2(a)). These initialization steps are worked out in more detail in [10].
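For concreteness, the color-clustering step could be prototyped offline roughly as in the following minimal sketch. It assumes the sampled pixels are available as an (N, 3) array of YCbCr values, uses scikit-learn's GaussianMixture as a stand-in for the "standard implementation of the Expectation-Maximization algorithm", and bakes the fitted model into a subsampled lookup table; the function names and the table resolution are illustrative, not taken from the Aibo implementation.

    # Sketch of the calibration's color-clustering step (offline, illustrative).
    # `pixels` is assumed to be an (N, 3) array of sampled YCbCr values in [0, 255].
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def build_color_lookup(pixels, n_classes=10, bins_per_channel=32, seed=0):
        """Cluster sampled pixels into color classes and bake a YCbCr lookup table."""
        gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                              random_state=seed)
        gmm.fit(pixels)
        # Subsampled table: 32x32x32 cells instead of 256x256x256 to keep memory small.
        centers = (np.arange(bins_per_channel) + 0.5) * (256.0 / bins_per_channel)
        y, cb, cr = np.meshgrid(centers, centers, centers, indexing="ij")
        grid = np.stack([y.ravel(), cb.ravel(), cr.ravel()], axis=1)
        table = gmm.predict(grid).astype(np.uint8)
        return table.reshape(bins_per_channel, bins_per_channel, bins_per_channel)

    def classify(image_ycbcr, table, bins_per_channel=32):
        """Map an (H, W, 3) YCbCr image onto color-class indices via the lookup table."""
        idx = (image_ycbcr.astype(np.int32) * bins_per_channel) // 256
        return table[idx[..., 0], idx[..., 1], idx[..., 2]]

Once the table is built, classifying a pixel reduces to a few integer operations and a single table read, which keeps the segmentation cheap enough for a modest on-board processor.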
^4 RoboCup @ Home League homepage, last accessed in May 2006, http://www.ai.rug.nl/robocupathome/
^5 Our algorithm consumes approximately 16 milliseconds per image frame; therefore we can easily process images at the full Aibo frame rate (30 fps).

Figure 1: Architecture of our algorithm.

Figure 2: Image processing: from the raw image to sector representation. (a) Unsupervised learned color segmentation. (b) Sectors and frequent color transitions visualized. This conversion consumes approximately 6 milliseconds/frame on a Sony Aibo ERS-7.
2.1 Sector signature correlation

Every incoming image is now divided into its corresponding sectors^6. The sectors are located above the calculated horizon, which is obtained by solving the kinematics of the robot. Using the lookup table from the unsupervised learned color clustering, we can compute the sector features by counting, per sector, the transition frequencies between each pair of color classes in the vertical direction. This yields a histogram of 10x10 transition frequencies per sector, which we subsequently discretize into 5 logarithmically scaled bins. In figure 2(b) we display the most frequent color transitions for each sector. Some sectors have multiple color transitions in the most frequent bin, other sectors have a single or no dominant color transition. This is only a visualization; not only the most frequent color transitions, but the frequencies of all 100 color transitions are used as the characteristic feature of the sector.
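As an illustration of this feature extraction, the sketch below counts vertical color-class transitions inside one sector and discretizes the resulting 10x10 histogram into five logarithmically scaled bins. The paper does not give the exact bin boundaries, so the powers-of-two thresholds used here are an assumption; the array layout and names are likewise illustrative.

    # Illustrative sector-feature extraction (not the on-robot implementation).
    # `sector` is an (H, W) array of color-class indices (0..9) for the pixels of
    # one sector above the horizon; transitions are counted column by column.
    import numpy as np

    N_CLASSES = 10
    N_BINS = 5  # logarithmically scaled frequency bins

    def transition_histogram(sector, n_classes=N_CLASSES):
        """Count vertical transitions between color classes per (from, to) pair."""
        hist = np.zeros((n_classes, n_classes), dtype=np.int32)
        upper, lower = sector[:-1, :], sector[1:, :]
        mask = upper != lower                      # only count actual transitions
        np.add.at(hist, (upper[mask], lower[mask]), 1)
        return hist

    def discretize(freq, n_bins=N_BINS):
        """Map raw transition counts onto logarithmically scaled bins.
        Assumed thresholds: counts of 0, 1-2, 3-6, 7-14 and 15+ map to bins 0..4."""
        return np.minimum(np.floor(np.log2(freq + 1)).astype(np.int32), n_bins - 1)

    # The per-sector feature used for matching is the discretized 10x10 grid, e.g.:
    # sector_bins = discretize(transition_histogram(sector_labels))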
In the learning phase we estimate all these 80x(10x10) distributions^7 by turning the head and body of the robot. We define a single distribution for a currently perceived sector by

$$P_{\mathrm{current}}(i, j, \mathit{bin}) = \begin{cases} 1 & \text{if } \mathrm{discretize}(\mathrm{freq}(i, j)) = \mathit{bin} \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

where i, j are indices of the color classes and bin is one of the five frequency bins. Each sector is seen multiple times, and the many frequency count samples are combined into a distribution learned for that sector by the equation:

$$P_{\mathrm{learned}}(i, j, \mathit{bin}) = \frac{\mathrm{count}_{\mathrm{sector}}(i, j, \mathit{bin})}{\sum_{\mathit{bin} \in \mathrm{frequencyBins}} \mathrm{count}_{\mathrm{sector}}(i, j, \mathit{bin})} \qquad (2)$$

After the learning phase we can simply multiply the current and the learned distribution to get the correlation between a currently perceived and a learned sector:

$$\mathrm{Corr}(P_{\mathrm{current}}, P_{\mathrm{learned}}) = \prod_{\substack{i, j \in \mathrm{colorClasses} \\ \mathit{bin} \in \mathrm{frequencyBins}}} P_{\mathrm{learned}}(i, j, \mathit{bin}) \cdot P_{\mathrm{current}}(i, j, \mathit{bin}) \qquad (3)$$

^6 80 sectors corresponding to 360°; with an opening angle of the Aibo camera of approx. 50°, this yields between 10 and 12 sectors per image (depending on the head pan/tilt).
^7 When we use 16-bit integers, a complete panorama model can be described by (80 sectors) x (10 colors x 10 colors) x (5 bins) x (2 byte) = 80 KB of memory.
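A small sketch of how equations (1)-(3) could be evaluated is given below. The (80, 10, 10, 5) count array follows the memory layout suggested by footnote 7; all names and shapes are illustrative. Since P_current is an indicator that selects exactly one frequency bin per color pair, the product in equation (3) is read here as the product, over all color pairs, of the learned probability of the observed bin; that reading is an interpretation on our part.

    # Illustrative evaluation of equations (1)-(3); layout per footnote 7, names assumed.
    import numpy as np

    N_SECTORS, N_CLASSES, N_BINS = 80, 10, 5

    def learn_distribution(counts):
        """Eq. (2): normalize each sector's accumulated bin counts over the frequency
        bins. `counts` has shape (N_SECTORS, N_CLASSES, N_CLASSES, N_BINS)."""
        total = counts.sum(axis=-1, keepdims=True).astype(float)
        return np.divide(counts, total, out=np.zeros(counts.shape), where=total > 0)

    def current_distribution(sector_bins):
        """Eq. (1): indicator distribution of a currently perceived sector.
        `sector_bins` is the (N_CLASSES, N_CLASSES) array of discretized bin indices."""
        p = np.zeros((N_CLASSES, N_CLASSES, N_BINS))
        i, j = np.indices(sector_bins.shape)
        p[i, j, sector_bins] = 1.0
        return p

    def correlation(p_current, p_learned_sector):
        """Eq. (3): the indicator picks one bin per color pair; the selected learned
        probabilities are multiplied over all 10x10 color pairs."""
        per_pair = (p_learned_sector * p_current).sum(axis=-1)
        return float(np.prod(per_pair))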
2.2 Alignment

After all the correlations between the stored panorama and the new image signatures have been evaluated, we would like to get an alignment between the stored and seen sectors so that the overall likelihood of the alignment becomes maximal. In other words, we want to find a diagonal path with minimal cost through the correlation matrix. This minimal path is indicated by green dots in figure 3. The path is extended to a green line for the sectors that are not visible in the latest perceived image.

We consider the fitted path to be the true alignment and extract the rotational estimate φ_robot from the offset of its center pixel to the diagonal (Δ_sectors):

$$\varphi_{\mathrm{robot}} = \frac{360^{\circ}}{80} \cdot \Delta_{\mathrm{sectors}} \qquad (4)$$

This rotational estimate is the difference between the solid green line and the dashed white line in figure 3, indicated by the orange halter. Further, we try to estimate the noise by fitting a second path through the correlation matrix far away from the best-fitted path:

$$\mathrm{SNR} = \frac{\sum_{(x,y) \in \mathrm{minimumPath}} \mathrm{Corr}(x, y)}{\sum_{(x,y) \in \mathrm{noisePath}} \mathrm{Corr}(x, y)} \qquad (5)$$

The noise path is indicated in figure 3 with red dots.
Figure 3: Visualization of the alignment step while the robot is scanning with its head. (a) Robot standing on the trained spot (matching line is just the diagonal). (b) Robot turned right by 45 degrees (matching line displaced to the left). The green solid line marks the minimum path (assumed true alignment) while the red line marks the second-minimal path (assumed peak noise). The white dashed line represents the diagonal, while the orange halter illustrates the distance between the found alignment and the center diagonal (Δ_sectors).
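To make the alignment step concrete, the sketch below scores every cyclic diagonal offset of the correlation matrix, takes the best one as the alignment and the best path far away from it as the noise reference, and evaluates equations (4) and (5). The paper does not spell out the path search or how far the noise path must stay from the winner, so the brute-force search and the five-sector margin are assumptions.

    # Illustrative alignment of perceived sectors against a learned panorama model.
    # `corr[k, s]` holds Corr() between perceived sector k and learned sector s.
    import numpy as np

    N_SECTORS = 80

    def align(corr, margin=5):
        """Return (rotation estimate in degrees, SNR) for one correlation matrix."""
        n_seen = corr.shape[0]
        k = np.arange(n_seen)
        # Summed correlation along every cyclic diagonal: perceived k <-> learned (k+d) % 80.
        scores = np.array([corr[k, (k + d) % N_SECTORS].sum() for d in range(N_SECTORS)])
        best = int(np.argmax(scores))        # assumed true alignment (minimum-cost path)
        # Peak noise: best-scoring diagonal at least `margin` sectors away from the winner.
        far = [d for d in range(N_SECTORS)
               if min((d - best) % N_SECTORS, (best - d) % N_SECTORS) > margin]
        noise = max(scores[d] for d in far)
        delta = best if best <= N_SECTORS // 2 else best - N_SECTORS  # signed offset
        rotation = 360.0 / N_SECTORS * delta                          # eq. (4)
        snr = scores[best] / noise if noise > 0 else float("inf")     # eq. (5)
        return rotation, snr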
2.3 Position Estimation with Panoramic Localization

The algorithm described in the previous section can be used to get a robust bearing estimate together with a confidence value for a single trained spot. As we finally want to use this algorithm to obtain full localization, we extended the approach to support multiple training spots. The main idea is that the robot determines to what extent its current position resembles the previously learned spots and then uses interpolation to estimate its exact position. As we think that this approach could also be useful for the RoboCup @ Home league (where robot localization in complex environments like kitchens and living rooms is required), it could become possible that we finally want to store a comprehensive panorama model library containing dozens of previously trained spots (for an overview see [1]).

However, due to the computation time of the feature space conversion and panorama matching, only a single training spot and its corresponding panorama model can be selected per frame. Therefore, the robot cycles through the learned training spots one by one. Every panorama model is associated with a gradually changing confidence value representing a sliding average of the confidence values we get from the per-image matching.
After training, the robot memorizes a given spot by storing the confidence values received from the training spots. By comparing a new confidence value with its stored reference, it is easy to deduce whether the robot stands closer to or farther from the imprinted target spot.

We assume that the imprinted target spot is located somewhere between the training spots. Then, to compute the final position estimate, we simply weight each training spot with its normalized corresponding confidence value:

$$\mathrm{position}_{\mathrm{robot}} = \sum_{i} \mathrm{position}_{i} \cdot \frac{\mathrm{confidence}_{i}}{\sum_{j} \mathrm{confidence}_{j}} \qquad (6)$$

This should yield zero when the robot stands at the target spot, or a translation estimate towards the robot's position when the confidence values are not in balance anymore.
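Equation (6) then amounts to the weighted average sketched below. Following the remark that the estimate should yield zero at the target spot, the training-spot positions are taken relative to that target spot; this reading and all names are our own illustration, and the comparison against the stored reference confidences is omitted for brevity.

    # Illustrative evaluation of equation (6); positions taken relative to the target spot.
    import numpy as np

    def estimate_position(spot_offsets, confidences):
        """Weighted average of the training-spot offsets (relative to the target spot),
        each weighted with its normalized sliding-average confidence."""
        offsets = np.asarray(spot_offsets, dtype=float)   # shape (n_spots, 2)
        w = np.asarray(confidences, dtype=float)
        return offsets.T @ (w / w.sum())                  # (x, y) displacement estimate

    # Four spots roughly 1 m from the target along the axes, as in the experiment
    # described below: equal confidences give a zero displacement, i.e. the robot
    # is estimated to stand at the target spot.
    # estimate_position([(1, 0), (0, 1), (-1, 0), (0, -1)], [0.4, 0.4, 0.4, 0.4])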
To prove the validity of this idea, we trained the robot on four spots on a regular 4-Legged field in our robolab. The spots were located along the axes, approximately 1 m away from the center. As target spot, we simply chose the center of the field. The training itself was performed fully autonomously by the Aibo and took less than 10 minutes. After training was complete, the Aibo walked back to the center of the field. We recorded the found position, kidnapped the robot to an arbitrary position around the field and let it walk back again.

Please be aware that our approach for multi-spot localization is at this moment rather primitive and has to be understood only as a proof of concept. In the end, the panoramic localization data from vision should of course be processed by a more sophisticated localization algorithm, like a Kalman or particle filter (not least to incorporate movement data from the robot).
3 Results

3.1 Environments

We selected four different environments to test our algorithm under a variety of circumstances. The first two experiments were conducted at home and in an office environment^8 to measure performance under real-world circumstances. The experiments were performed on a cloudy morning, on a sunny afternoon and late in the evening. Furthermore, we conducted exhaustive tests in our laboratory. Even more challenging, we took an Aibo outdoors (see [7]).

3.2 Measured results

Figure 4(a) illustrates the results of a rotational test in a normal living room. As the error in the rotation estimates ranges between -4.5 and +4.5 degrees, we may assume an error in alignment of a single sector^9; moreover, the size of the confidence interval can be translated into at most two sectors, which corresponds to the maximal angular resolution of our approach.

^8 XX office, DECIS lab, Delft
^9 full circle of 360° divided by 80 sectors
Figure 4: Typical orientation estimation results of experiments conducted at home. (a) Rotational test in a natural environment (living room, sunny afternoon). (b) Translational test in a natural environment (child's room, late in the evening). In the rotational experiment on the left the robot is rotated over 90 degrees on the same spot, and every 5 degrees its orientation is estimated. The robot is able to find its true orientation with an error estimate equal to one sector of 4.5 degrees. The translational test on the right is performed in a child's room. The robot is translated over a straight line of 1.5 meter, which covers the major part of the free space in this room. The robot is able to maintain a good estimate of its orientation, although the error estimate increases away from the location where the appearance of the surroundings was learned.
Figure 4(b) shows the effects of a translational dislocation in a child's room. The robot was moved along a straight line back and forth through the room (via the trained spot somewhere in the middle). The robot is able to estimate its orientation quite well on this line. The discrepancy with the true orientation is between +12.1 and -8.6 degrees, close to the walls. This is also reflected in the computed confidence interval, which grows steadily when the robot is moved away from the trained spot. The results are quite impressive given the relatively big movements in a small room and the resulting significant perspective changes in that room.

Figure 5(a) also stems from a translational test (cloudy morning), which was conducted in an office environment. The free space in this office is much larger than at home. The robot was moved along a 14 m long straight line to the left and right and its orientation was estimated. Note that the error estimate stays low at the right side of this plot. This is an artifact which nicely reflects