JCP1 - A Java-based DIY robot!
Progress Blog
23. Sep 2008 Added support for loading JSGF files with rules that can be imported into other grammars. A grammar specifying numbers was the first one I added, and at the same time I added a general calculation module where you can ask it to e.g. "Add 345 to 129" and do other calculations. The same number grammar was used in the currency behaviour so that it can convert any uttered amount to the local currency. I have also added a moon phase grammar where you can ask it about the current moon phase or the next full/new moon. Finally, a sunrise/sunset grammar was added so you can ask when the sun rises or sets today or tomorrow.
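For the curious, here is roughly what a number grammar and a grammar importing it look like in JSGF (the grammar and rule names are illustrative, not my actual files):

```
#JSGF V1.0;
grammar numbers;

// A public rule can be imported by other grammars.
public <number> = <digit> | <teen> | <tens> [<digit>];
<digit> = one | two | three | four | five | six | seven | eight | nine;
<teen>  = ten | eleven | twelve | thirteen | fourteen | fifteen | sixteen;
<tens>  = twenty | thirty | forty | fifty | sixty | seventy | eighty | ninety;
```

```
#JSGF V1.0;
grammar calculation;
import <numbers.number>;

public <add> = add <number> to <number>;
```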
22. Sep 2008 Finally got JCP1 up and running again - well, at least the OS, drivers and some of the software are installed by now. I also did a quick test and got my software running there, although with no devices active. I see that I have to rip out the annoying little CPU fan on this motherboard; it's terribly noisy spinning at 4000 rpm! My hope is either to find a bigger copper heatsink so that the thing can be passively cooled, or to just mount a low-speed 12 cm case fan above it instead. The DVD drive is fairly noisy as well, so the robot might pick up that noise when a disc is spinning at full speed. Hopefully it's somewhat muffled once it is mounted correctly inside the robot. The battery lasted for approximately 2.5 hours, but I don't think it was fully charged since it had been unused for quite some time. I guess it needs some rounds of recharging to get it back up again. Hopefully I should be able to run the robot for 3 hours on one charge.
19. Sep 2008 (photo)
10. Sep 2008 The last few days I have been adding DVD and CD ripping behaviours. When you insert a DVD it will ask if it should make a backup of it on your servers. It uses AnyDVD for this. The only problem I have bumped into is the name of the movie. There is no description file on the DVD indicating its title, the volume label of the disc is often cryptic, and sometimes the label is just DVD! I really wanted the robot to look up information about the movie on IMDb the moment you popped it in the drive so you could ask questions about it. I will probably have the robot ask you which movie it is if there isn't enough information on the volume label to figure it out. I also added music CD backup so it can store your music CDs on the server too. I will experiment further with playback on a remote server so you can essentially use the robot as a remote CD player on wheels and have the sound come out of your stereo. :) In addition to this I have added remote server picture viewing, and it can do image searches on Google Images and show them in a slideshow. I will add support for user pictures where the directory name in your photo library is used as the identifier when you ask it to show pictures of certain things or people. For any of these remote server tasks I will add some grammars so that you can configure the robot's behaviour in such a way that it will always e.g. play music on your living room server. At this point JCP1 only knows about one server, but the general idea is that any remote task can be directed to whichever display you want it to use, and in the future there will be an automatic mode where it knows which room it is in and uses the display in that room.
9. Sep 2008 Decided to stick with the D201GLY2 board I use now, because the new Atom board from Intel consumes around the same amount of power due to its hungry chipset. Ordered the correct cables so I can get the D201GLY2 up and running, and also ordered a slim slot-in DVD drive for DVD and music ripping/playback.
8. Sep 2008 I think I have found the best board for a PC-based robot now. The Kontron KTUS15/mITX - 1.6 uses the new low-power Z530 Atom processor as well as the new US15W chipset; together they consume less than 5 watts! A complete board would probably use a bit more, and I am waiting for a reply from the company about real power usage when idle and under load. But it does seem like this whole thing consumes less than 10 watts, which makes it absolutely brilliant for a robot. Benchmarks of the Z530 show it's about the same as the other Atom processors and should be more than enough to process both speech recognition and the webcam stream.
8. Sep 2008 Been going back and forth trying to decide if I should get an Atom board, but it seems the D945GCLF from Intel consumes around the same amount of power as the D201GLY2 while performing worse on integer operations. The only advantages are more USB ports and the ability to plug the M3 Pico PSU directly into it (really silly placement of the power connector on the D201GLY2). They both require active cooling, the D201GLY2 on the CPU and the D945GCLF on the chipset (which consumes a lot of power). The new Dell Atom-based mini laptop is the first to have 100% passive cooling, so that one might be a good robotic platform also, but I think I will wait for the next generation of low-power boards before I do any upgrade again; after all, the D201GLY2 performs very well as it is now. With the new distributed server processes I can also use several other computers for processing vision, so CPU power on the robot is of less concern. But power usage is. My 10 Ah SLA battery would probably be dead within 3 hours with the current power consumption, and I'd like the whole robot to run for at least 6 hours between recharges. Since it's only a 10 Ah battery I can only charge it at 800 mA on my charger, which can take 10 hours. Some day I might kit the robot out with NiMH or Li-ion packs, which will provide much longer runtime per kilo.
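(A rough sanity check on the runtime, assuming an average draw of around 35 W - a guess, not a measurement: a 12 V, 10 Ah pack holds about 120 Wh, and 120 Wh / 35 W ≈ 3.4 hours, which is in line with the 3-hour estimate above.)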
5. Sep 2008 (photo)
4. Sep 2008 I have spent the last days adding support for the DICTATION grammar keyword. By that I mean I can add DICTATION anywhere in a grammar to accept any word, which is then used as the tag in the result set. This enabled me to add a new InternetLookup behaviour with grammars for "Define XXX" and "What is a XXX". It will use answers.com and Wikipedia to get definitions and descriptions of the word you uttered and say them out loud. It works very well, and it was quite fun to learn definitions of words I didn't exactly know the meaning of. But of course the recognition can often pick up the wrong word, since there are many similar-sounding words. This is the problem with all non-grammar recognition. The only way it can be improved is by giving the system an extra clue about what you are asking for. For example, you could say "What is the color blue" and I could limit the dictated word (blue) to colors so it doesn't think you said glue or flu or something similar. Identifying categorisation requires a lot of work, though. But I could also use the category as a search word and only match the articles that contain that keyword (the word "color" is likely to be found more often in the article about "blue" than in the one about "glue"). This requires it to look at all the word guesses and look them all up.
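In sketch form the idea looks like this (illustrative names, not code from JCP1): among the recognizer's alternative guesses, prefer the word whose article mentions the category hint the most.

```java
import java.util.List;
import java.util.Map;

// Sketch only: pick the candidate word whose article text mentions the
// category hint ("color", "disease", ...) most often.
public final class GuessRanker {

    public static String bestGuess(List<String> candidates,
                                   Map<String, String> articles, // word -> article text
                                   String categoryHint) {
        String best = null;
        int bestScore = -1;
        for (String word : candidates) {
            String article = articles.containsKey(word) ? articles.get(word) : "";
            int score = count(article.toLowerCase(), categoryHint.toLowerCase());
            if (score > bestScore) {
                bestScore = score;
                best = word;
            }
        }
        return best;
    }

    // Counts non-overlapping occurrences of term in text.
    private static int count(String text, String term) {
        int n = 0;
        for (int i = text.indexOf(term); i >= 0; i = text.indexOf(term, i + term.length())) {
            n++;
        }
        return n;
    }
}
```

So for "What is the color blue", the guesses blue/glue/flu would be ranked by how often "color" appears in each of their articles.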
3. Sep 2008 Started coding again, and it seems JMF will not work on Vista, so I plan to switch to another Java API wrapper that uses DirectShow directly. Anyway, I have created the outline of the distributed server processes I have planned. When you fire up JCP1 it will broadcast a UDP "ping" on the subnet announcing itself. The robot runs an RMI registry that any server can add its RMI services to for the robot to use. When the robot is shut down, the server processes go back to listening for robot "pings" again. My first service API and implementation offers the ability for the robot to display a fullscreen message, show an image (passed in or loaded on the server), start/stop music and start external programs. This is quite powerful, because the robot can then start a remote RoboRealm server which it uses for vision processing whenever it needs it. It can also display information you ask for on a monitor/TV in front of the user. Each server will register its service object with a name indicating where it is located and whether it has a screen (some can be headless). The robot can then use the display that is closest to its current location. An example: if I am sitting in my living room I can ask the robot to show me some pictures of my kids, and the robot will turn on the TV, switch to the media PC and run a fullscreen slideshow of pictures stored on the media PC or the robot. I can ask it to play some music by Sting and it will generate a playlist that is used in Winamp (there is a nice command-line tool for operating Winamp from another process). I will also add playback of movies, although I need some good way of controlling the media player by command line for that to work (or I could add a general media player based on a DirectShow wrapper). The idea is for the robot to use all available computing resources through this interface so that it can distribute its processing requirements, especially when multiple vision algorithms are needed. That might require me to echo the image data between the servers, though, since they are on gigabit Ethernet, which is very fast compared to the wireless connection the robot is using.
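The handshake is simple enough to sketch (the port, names and ping payload below are invented for illustration):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Sketch of the robot's side of the discovery handshake.
public final class RobotAnnouncer {

    public static void main(String[] args) throws Exception {
        // The robot hosts the RMI registry that servers bind their services into.
        Registry registry = LocateRegistry.createRegistry(Registry.REGISTRY_PORT);

        // Broadcast a UDP "ping" on the subnet so listening servers know
        // the robot is up and where the registry lives.
        DatagramSocket socket = new DatagramSocket();
        socket.setBroadcast(true);
        byte[] ping = "JCP1-PING".getBytes("US-ASCII");
        socket.send(new DatagramPacket(ping, ping.length,
                InetAddress.getByName("255.255.255.255"), 4446));
        socket.close();

        // A server that hears the ping then does something like
        //   Naming.rebind("//robot/livingroom-display", service);
        // and the robot looks services up by name when it needs a display.
    }
}
```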
27. Aug 2008 (photo)
27. Aug 2008 (photo)
27. Aug 2008 My main workstation has been upgraded with a brand new Core2Duo and fast memory, and is running Vista in a nice Antec P182 case. I am trying to get the development environment up and running there again, and so far I have run into problems with JMF. I hope I can make it work, but I got a bluescreen when it was scanning for capture devices. In time I actually hoped to get rid of JMF, because platform independence isn't that important and there is another option that uses the DirectShow API natively. Other than that, I saw that Cepstral has a few new voices in one of their web services, but unfortunately they aren't for sale yet. If you are considering good-sounding speech output on your robot, I'd recommend checking out Cepstral (btw, I am not paid for saying this :). My new computer seems very powerful and could probably run many RoboRealm vision computations at the same time, so in time I will experiment with a genetic blob and feature detector to see if that could be used as information for the robot's navigation module. This will be very complex, though. The general idea is to identify blobs and features and make a "mental map" of blob adjacency, and after a while it could use this information to estimate where it is in relation to other things. The trick is to identify which blobs are stationary and which move about. I guess I need to create some statistical data that is continuously updated to let the robot build a more solid map. As I said, this is very complex stuff and chances are I will just experiment with the idea. Autonomous navigation is one of the areas where my robot is severely lacking.
3. Apr 2008 I went to Biltema and bought some imitation leather and tried to use that as "skin" on the back of the robot as well as on the top and neck. I am not satisfied with the result, but it will have to do for now. I will possibly use alu plates on the back as well, but stick with leather on the top and perhaps give the robot a leather cape! :) Sounds silly, yes, but I think it might actually look good.
2. Apr 2008 (photo)
1. Apr 2008 I have now successfully integrated the USB-UIRT so that the robot can operate the TV through simple voice commands: turn it on/off, change channels, mute, etc. I added some of my TV channel names as grammars too, and the robot will select the right channel on the TV. At the moment it doesn't seem to have a long range, but I think I just need to increase the number of repeats when it fires out a command. The next challenge is naturally to make the robot locate the TV and position itself correctly. This is hard since I have not yet made any good mapping and navigation algorithms. At first I will just try to make the robot face the TV based on the magnetic compass (which is a bit flaky at the moment).
29. Mar 2008 Fixed the head movement algorithm so that it will track any object (COG) with head move updates once per second. The move speed is dynamically adjusted so that the head reaches its intended position within 1 second. This ensures smooth head movement when tracking an object (e.g. the user's head or a tennis ball). The next challenge is to add a behaviour where, if the head is held to one side for too long, the robot rotates its wheels to re-center the head.
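The speed calculation itself is tiny; in sketch form (units and names are illustrative):

```java
// Sketch: choose a servo speed so the head reaches the target within one
// update period - this is what makes the tracking motion look smooth.
public final class HeadMove {
    private static final double UPDATE_PERIOD_SECONDS = 1.0;

    /** Positions in servo units; returns the speed in units per second. */
    public static double speedFor(double current, double target) {
        return Math.abs(target - current) / UPDATE_PERIOD_SECONDS;
    }
}
```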
28. Mar 2008 I totally forgot to blog that I have removed the old Via CN13000 board and am now using my Samsung Q1P as the robot brain. The 1 GHz Pentium M performs so much better than the 1.3 GHz Via!!! I have bumped up the fps of my vision code and there is still 50% CPU time left on average! This is good! Considering that I will most probably use a remote server for vision processing, I really don't need any more processing power. I am toying with the idea of taking the Q1P apart and finding a way to mount the 7" touch screen on the robot's chest. The only problem is finding a reliable way of extending the ribbon cable to the LCD. Other than that, the Q1P has everything I need, except it only has 2 USB ports! So I will probably need to install a USB hub and power it from the 12 volt battery (regulated down to 5 volts).
26. Mar 2008 I just love Java! Some things are so easy to do here. In a couple of hours I have integrated Wii controller support. You can tell the robot "enable manual control" and it will ask you to click 1 and 2 on your Wii controller to connect it. After that you can use the analog stick on the Nunchuk to drive the robot around! WiiRemoteJ was the library I used, and I also found a free Bluetooth JSR implementation that works wonderfully. I also added some step-based wheel control on the arrow keys and buttons of the main controller. The plan is to add some sound effect playback when you press certain buttons. That way I can have fun with the kids, driving it around and making sounds. Although I prefer autonomous operation, it was so simple to implement that I just had to add this remote control. It's also nice to have an external option for triggering certain modes and scripts from the Wii controller.
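The stick-to-wheels mapping is the classic "arcade drive" mix; a minimal sketch (not the WiiRemoteJ event handling itself):

```java
// Sketch: map the Nunchuk stick (x = turn, y = throttle, both in -1..1)
// to left/right wheel speeds in -1..1.
public final class ArcadeDrive {

    public static double[] mix(double x, double y) {
        return new double[] { clamp(y + x), clamp(y - x) }; // {left, right}
    }

    private static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }
}
```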
24. Mar 2008 I added a simple "map the area" command: the robot will rotate 360 degrees and use its front ultrasonic sensor to make a sonar readout. I then draw the output on a sonar graph shown in the robot GUI. I just wanted to try this out, since the sonar will be important for the internal mapping routines that I will be working on next.
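Drawing the readout is just a polar-to-cartesian projection; in sketch form (my variable names, not the actual GUI code):

```java
import java.awt.geom.Point2D;

// Sketch: project one sonar reading (heading in degrees, range in cm)
// to a point around the robot for the sonar graph.
public final class SonarPlot {

    public static Point2D.Double toPoint(double headingDegrees, double rangeCm) {
        double a = Math.toRadians(headingDegrees);
        return new Point2D.Double(rangeCm * Math.cos(a), rangeCm * Math.sin(a));
    }
}
```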
20. Mar 2008 I've added a new RoboRealm detection that will detect and follow a tennis ball with its head. I experimented with faster tracking, issuing head movements 3 times a second, but I see that I need to enhance my servo module so that I can set the position directly instead of using the servo controller registers for moving. If I use those, the head will get several move commands and frequently move past the goal point, whirring back and forth. By lowering the move rate to once every 2 seconds it follows the ball nicely. I also experimented with adjusting the move speed based on the distance so that I could be certain the head reached its goal position before I sent a new one. I also had to add semaphores so that only one task can use the head servos at a time, to avoid conflicting behaviours. I want to further enhance this to rotate the whole robot when the head is turned far to the left or right. This will probably be triggered when the head is turned beyond a certain position, and within that threshold I will also add a timer that turns the whole robot after it has had its head turned to one side for a while. The plan is to make this an inherent feature of how the robot "looks" at things, so that the individual behaviours don't have to worry about the problem.
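The semaphore part can be sketched with java.util.concurrent (illustrative, not my exact classes):

```java
import java.util.concurrent.Semaphore;

// Sketch: one permit for the head servos so competing behaviours
// (ball tracking, face tracking, idle moves) can't fight over them.
public final class HeadServoGuard {
    private final Semaphore permit = new Semaphore(1);

    /** Runs the move if the head is free; returns false if another task owns it. */
    public boolean tryMove(Runnable move) {
        if (!permit.tryAcquire()) {
            return false;
        }
        try {
            move.run();
        } finally {
            permit.release();
        }
        return true;
    }
}
```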
14. Mar 2008 No, the date is not wrong. A year has actually passed between these blog entries! :) The robot has been sitting on my desk for a long time now. I lost motivation when I faced performance issues on the Via board (hint: get something better). But I have now picked up the project again. My latest discovery is a great vision tool called RoboRealm. This is a really nice program that can act as a vision server which you can send images to for analysis. I immediately wanted to try to realise a fun little thing: adding a grammar where I can ask it "What shape is this" and it will locate the sheet of paper I am holding up with a simple symbol on it. It now recognizes a star, a rectangle, a square, a circle and a triangle. And it works brilliantly! I also enhanced my software so that it will connect to a remote RoboRealm server and load the robo scripts that are required for a certain vision task. I have a list of things I want to try next. I also fixed a couple of timing issues with the AskQuestionBehaviour so it isn't quite so annoying. :) And I changed the ChatBehaviour so that it has to see a face before it reacts to hellos, his, etc. This was needed because the robot thought it heard someone say hi all the time. Chances are that I will add grammar timers so that a certain grammar can only be triggered once within a certain time span. Another solution would be for the greeting command to make it turn around and look for a face before it greets back. Naturally this should only happen if the robot isn't doing something important (i.e. is in idle/creative mode).
12. Mar 2007 Received the USB-UIRT that I am going to use on the robot so that it can turn on and off the TV, receiver and other IR devices. I will also try using an ordinary remote control (that I am not using otherwise) to send commands to the robot. There is already a nice Java API for the USB-UIRT that I can use for this. |
12. Mar 2007 Realized that it was much easier to just attach the ultrasonic sensor to the front panel holes instead of having it mounted on the robot. The Robot-Electronics SRF10 sensor has some nice rubber rings for mounting, which were very easy to fasten, and it made the whole thing look much better. I also shaped and cut the alu panel on the right side of the robot. I have fastened the panels using velcro, which works better than I had imagined. I used some of the leftover aluminium to make some simple brackets which I fastened to the base using double-sided tape, and then put velcro on top of that. I also saw that I should use as little velcro as possible, since it sticks so well that it almost became hard to get the panel off afterwards! :) Well, 3 more alu panels to go and the robot will look much better. I have to cut some holes for the power buttons (main and PC) as well as 2 USB ports I'd like to have available on the outside. On the top I plan to add some cloth mounted up as the "neck"; it has to be loose enough that the robot can pan and tilt the head.
11. Mar 2007 Started drilling some holes in the aluminium body front panel for the ultrasonic sensors. The plan was for them to stick out a bit, but I think I will ditch that idea, move them a few millimeters in, and just have the holes in the body expose the ultrasonic transducers. Next I want to get some alu brackets I can attach to the base and then use velcro to fasten the alu body. I think that should hold it in place well enough. The velcro should also minimize any rattling. I also fastened the head bracket and found a way to attach the wires underneath so that they stick out a bit in the back and have enough flex to move the head without giving the servos too much resistance. I fixed the head behaviour code so that if it doesn't see a face it will move its head in a random fashion with a 20% probability of facing forward. I will probably work more on this and make the head movement more natural. One feature I want to add is motion detection that triggers head movement, so that when it's standing still it will do a motion check and move its head to look at the area of the image with the most motion. Chances are that it will find a head it can look at after this movement.
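The random head movement boils down to a weighted pose pick; a sketch (ranges and names are illustrative):

```java
import java.util.Random;

// Sketch: face forward 20% of the time, otherwise pick a random pan/tilt.
public final class IdleHeadBehaviour {
    private final Random random = new Random();

    /** Returns {pan, tilt} in servo units, within the given ranges. */
    public double[] nextPose(double panRange, double tiltRange) {
        if (random.nextDouble() < 0.2) {
            return new double[] { 0.0, 0.0 }; // face forward
        }
        return new double[] {
            (random.nextDouble() * 2.0 - 1.0) * panRange,
            (random.nextDouble() * 2.0 - 1.0) * tiltRange
        };
    }
}
```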
10. Mar 2007 It's been a week since I last worked on the robot. I just wanted a break to do some other stuff. I have worked more on the question behaviour and found a pattern that can enable and disable behaviours based on questions and answers, with a timeout so that it reverts to the standard grammar set when it times out. I also did some fixes to the free-roaming behaviour so that it moves a bit faster, has more responses and is generally a bit better. I have realised that the wheels are just too small; the caster wheel especially will all too often get stuck on carpet edges. Since I only have one ultrasonic sensor in the front, it's also not very intelligent in choosing a path for the moment. I have still not added any map navigation, but that will come soon too - hopefully something that can be constructed in real time as it explores its area. Chances are that I need some reference points against which it can recalibrate itself in some way, preferably through its vision.
3. Mar 2007 This weekend I reworked the Behaviour classes a bit so that they have four states: active, inactive, paused and running. This was to enable context behaviours and to be able to distinguish a merely paused behaviour from a completely deactivated one. Furthermore, I added categorisation of behaviours so that e.g. CRITICAL behaviours are always on, while STANDARD ones can be paused. The Consciousness processor that manages these behaviours has methods for pausing and resuming them. This is the basis I needed for conversational behaviours, where the robot disables a number of behaviours while it is having a conversation with you. It also enables temporary behaviours that can e.g. simplify speech input by accepting a subgrammar with shorter commands ("forward" instead of "move wheels forward"). I have realised that my code needs some semaphores around the sound input/output so that the threads don't trip over each other's feet as the recognizer is disabled and enabled. There have been cases where the hearing processor was active when it should not have been, and where it was not turned back on after some voice output. These cases are further complicated by the enabling and disabling of grammars.
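In outline, the life cycle looks like this (a sketch of the idea, not the real JCP1 classes):

```java
// Sketch: behaviours have a life-cycle state plus a category, and only
// non-critical behaviours can be paused by the Consciousness processor.
public abstract class Behaviour {
    public enum State { ACTIVE, INACTIVE, PAUSED, RUNNING }
    public enum Category { CRITICAL, STANDARD }

    private final Category category;
    private State state = State.INACTIVE;

    protected Behaviour(Category category) {
        this.category = category;
    }

    public void pause() {
        if (category == Category.CRITICAL) {
            return; // critical behaviours are always on
        }
        state = State.PAUSED;
    }

    public void resume() {
        state = State.ACTIVE;
    }

    public State getState() {
        return state;
    }
}
```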
2. Mar 2007 Ok, I have now created a new subdomain at http://robot.lonningdal.net (where you are reading this). Hoping that these pages will be of some interest to people who are also building general-purpose PC robots. Some day I plan to add some blueprints and a bill of materials for my design so that you can create a JCP1 too if you want. :) - I am not sure when or if I will be ready to give out the code freely. I might even consider writing a book about it and let everyone see the solution there.
1. Mar 2007 (photo)
1. Mar 2007 (photo)
1. Mar 2007 (photo)
28. Feb 2007 (photo)
27. Feb 2007 I started work on what I call behaviour trees, a system that dynamically alters the currently active behaviours. So if a certain behaviour is triggered, it can enable some new behaviours and disable others. This is the foundation I need to be able to create conversations, and to begin with I made a simple behaviour where, if the robot has seen a face for 10 seconds, it will ask if it can ask you a question. The new behaviour is that you can answer yes or no (and variations thereof). I am also going to add timers to behaviours so that certain behaviours are only active for a certain amount of time. For example, the question behaviour will only be active for 10 seconds before the robot goes back to normal operation (or its previous state). It's becoming an interesting state-driven engine now. I also added global timer triggers to the Consciousness module so that I have one single thread firing events to the different modules instead of each behaviour module having to run its own thread.
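The single timer thread can be sketched with a ScheduledExecutorService (illustrative names again):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch: one timer thread fires events into the behaviour modules instead
// of every behaviour running its own thread.
public final class ConsciousnessTimer {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    /** A repeating trigger, e.g. a periodic face check. */
    public ScheduledFuture<?> every(long periodMs, Runnable event) {
        return timer.scheduleAtFixedRate(event, periodMs, periodMs, TimeUnit.MILLISECONDS);
    }

    /** A one-shot trigger, e.g. a behaviour timing out back to normal operation. */
    public ScheduledFuture<?> after(long delayMs, Runnable event) {
        return timer.schedule(event, delayMs, TimeUnit.MILLISECONDS);
    }
}
```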
26. Feb 2007 Did some modifications to the roaming behaviour so it's more robust and navigates a bit more intelligently. I realised that it has problems getting over the carpet corners if it approaches at an angle. This is because the caster wheel is flat and made of plastic. I think I want to swap it for an oval rubber caster instead to make this better. The caster's swivel is terrible too, with way too much friction - I had to use some CRC on it to get it to rotate the way it should.
25. Feb 2007 Added code for long-term memory: a database system where you can store any kind of information on a tag and add relations between tags. I then used this to store the creation time of the JAR file containing the robot code, so that it will look at this every time it starts up, and if the file has been updated the robot will utter words of happiness about its software update. It's quite fun! :)
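The core of it is just tagged values plus symmetric relations; a sketch (in-memory here, where the real thing is backed by a database):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: store any value under a tag, and keep free-form relations between tags.
public final class LongTermMemory {
    private final Map<String, String> facts = new HashMap<String, String>();
    private final Map<String, Set<String>> relations = new HashMap<String, Set<String>>();

    public void store(String tag, String value) { facts.put(tag, value); }

    public String recall(String tag) { return facts.get(tag); }

    public void relate(String tagA, String tagB) {
        link(tagA, tagB);
        link(tagB, tagA);
    }

    private void link(String from, String to) {
        Set<String> set = relations.get(from);
        if (set == null) {
            set = new HashSet<String>();
            relations.put(from, set);
        }
        set.add(to);
    }
}
```

The update check then just compares new File("jcp1.jar").lastModified() against the timestamp remembered under an update-time tag (the file and tag names here are made up).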
24. Feb 2007 Haven't done much this weekend, except that I added a behaviour where you can ask it about currencies. It will download the latest currency rates off the net automatically. Later I am going to add the ability to convert currencies as well. It's quite useful, since I often purchase things from the USA and would like to know how the dollar compares to the Norwegian krone. I also added a roaming behaviour where it will just drive around, randomly exploring the room.
20. Feb 2007 Some minor software modifications, as well as the ability for the robot to play back WAV files. I use this to play back sound effects so I can make it laugh, snore or make animal sounds. It's a fun feature, and I have already added some behaviour code that interacts with it. My daughter will love it for sure! :)
19. Feb 2007 Since the robot now has access to the internet, I quickly made a behaviour where you can ask it about the current weather as well as a forecast for the next day. The data is retrieved from a Yahoo Weather RSS feed, parsed and uttered by the robot. It works brilliantly, and I definitely plan to add more useful features that use the internet - among them a POP3 mail checker, an RSS news reader and a currency converter. In time I might explore the possibility of the robot looking up articles on Wikipedia which it can read to you. This requires me to use a dictation mode for the words to look up and will probably not be very precise. Still, it's an interesting concept I want to explore.
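The fetch-and-parse step is a few lines of standard Java XML handling. In this sketch I just pull the first item's title out of the feed, which for a weather feed is a human-readable summary the robot can speak aloud (the feed URL would be the Yahoo Weather RSS endpoint for your location):

```java
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch: fetch an RSS feed and return the first item's <title>.
public final class RssHeadline {

    public static String firstItemTitle(String feedUrl) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new URL(feedUrl).openStream());
        NodeList items = doc.getElementsByTagName("item");
        Element first = (Element) items.item(0);
        return first.getElementsByTagName("title").item(0).getTextContent();
    }
}
```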
19. Feb 2007 I finally bought a wireless card that I found on special sale. It's a 54 Mbit/s PCI wireless card, which is more than enough for my planned use. Installing a new version of my software is a breeze now: since the robot's drive is shared on the network, I can pack the JAR file and deploy it immediately with one click. I also installed VNC and can remotely control and see the user interface of the robot. Fortunately it doesn't seem to eat much CPU time either. Connecting with VNC, I was also able to figure out what happens when you boot the Via Epia board in headless mode. It was always reverting to 640x480 resolution, which caused my GUI layout manager to collapse in a way that made text fields so small that they got stuck in layout repaints which quickly gobbled up 100% of the CPU time. A quick fix for this resulted in a much smoother-running robot, with average CPU load at around 50% with face detection running once every 3 seconds. The camera stream is still what consumes the most CPU time, and I really want to explore the possibilities for lowering the CPU use of this part of the software. I really only need one picture per second for the robot to work satisfactorily, so it's a shame my code has to retrieve a stream of image data that it mostly has to ditch. For my Logitech camera, the framerate adjustment doesn't work either.
18. Feb 2007 (photo)
18. Feb 2007 (photo)
13. Feb 2007 I have done a number of software changes and additions. Among these, the robot will now pause the speech recognizer whenever it is talking, to avoid the mic picking up its own words as commands. This worked fine and the robot became a bit more "quiet". :) I also added some code to move the head's tilt/pan servos depending on where the largest face in its vision is. This worked somewhat, but I have realized that the Creative Live Vista web camera I have is terrible in low-light conditions, so I definitely have to change it. Chances are that I will swap it with my desktop computer's Logitech Pro 4000 camera, which seems to work much better in lamp light. That camera also has an integrated mic which seems to work OK for voice recognition. I also added some more conversational commands: you can ask it what the time is and how it is feeling. If its battery is very low it will say that it doesn't feel too well and that it needs more power! :) A grammar for asking about CPU load and sensor readouts was also added. It's nice to have these, because I don't have a screen on it and all debug information has to come through its voice now. I desperately need a wireless card in this, for debugging as well as a simpler way to install new software. The microphone still picks up servo sounds of course, and I am thinking about trying to time the head movements to periods when the robot is talking (hence not listening), but that might be too limiting. I might consider building the head in such a way that it dampens most of the servo motor sounds. The next thing I want to do with the robot is to interface the digital compass module.
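With JSAPI (the CloudGarden implementation in my case) the pattern is roughly this - a sketch, not my actual speech module:

```java
import javax.speech.recognition.Recognizer;
import javax.speech.synthesis.Synthesizer;

// Sketch: pause the recognizer while the synthesizer talks so the mic
// doesn't pick up the robot's own words as commands.
public final class Mouth {
    private final Recognizer ears;
    private final Synthesizer voice;

    public Mouth(Recognizer ears, Synthesizer voice) {
        this.ears = ears;
        this.voice = voice;
    }

    public void say(String text) throws Exception {
        ears.pause(); // stop listening to ourselves
        try {
            voice.speakPlainText(text, null);
            voice.waitEngineState(Synthesizer.QUEUE_EMPTY);
        } finally {
            ears.resume(); // listen again once the speech queue is empty
        }
    }
}
```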
11. Feb 2007 This weekend I assembled it all except the compass module. For the first time I also ran the Mini-ITX on battery power, which worked flawlessly. I have updated my code to support it all and reworked my behaviour code so that it's more object-oriented. The robot can now run by itself using simple commands. I had to deactivate the speech recognition every time the robot spoke, or it would very easily hear its own words as commands! :) For the moment the robot is very talkative and has some random sentences as well as some famous quotes. It will also talk about its battery state once in a while, and more frequently when the battery is getting low. Furthermore, it will react to its ultrasonic sensor when something is closer than 20 cm: if it's moving forward it will stop the motors, otherwise it will just say that something is very close to it. The robot will also greet you when it sees a face after a period of not seeing one. The only thing the robot is missing now is a wireless network adapter. It's very tedious to get the latest version of my code onto it now - I have to transfer it using a USB stick. Also, the Creative webcam drivers seem to have a bug, since I am unable to disconnect from the camera without the whole Java VM hanging. This ruins my auto-update plans for the software as well. I was hoping to add a function that checks for an updated version of the software and auto-downloads it. The general idea is to have a seamless software upgrade that doesn't require me to have a screen/keyboard/mouse connected to it.
9. Feb 2007 Checked out the new Creative Live! Cam Voice web camera in a store. It has an integrated array mic, which should make it better for speech recognition and probably a bit more directional. It looks pretty cool and should probably work fine - unlike the Creative Live Vista cam I am using now, which seems to hang when you close the data stream. Chances are that I will use a webcam with an integrated mic to reduce the number of devices and avoid having a separate microphone.
8. Feb 2007 The Jameco package contained everything I need to wire it all up now, except that I forgot to order the power plug thingy that connects to the picoPSU. I think I managed to find something like it at Clas Ohlson today and will try to use that. I also picked up a soldering iron kit so that I can add the connector to the sensor and handle the other soldering needs which will no doubt spring up.
8. Feb 2007 The USB speakers have decent sound and came with some drivers I also tried. They had some environmental sound effect features which I thought I could use to make the robot sound a bit cool, but I soon realised that this was all implemented in the driver software and was not a feature of the speakers themselves. This basically meant that the CPU load jumped to 100% every time it spoke, and even made the sound break up in a stutter. The drivers were swiftly uninstalled! :)
8. Feb 2007 Eager to get the OpenCV face detection up and running, I made a behaviour module that looks for a face every 10 seconds and says a greeting or something when it finds one. It's great fun and worked immediately. I will add some more parameters to this so that it only triggers on faces that are close to it and only greets if it hasn't seen a face for some time. This is all very simple to do in software.
8. Feb 2007 The package from Jameco arrived, and I bought some small Trust USB speakers which work brilliantly for the robot. The funny thing is that they look like eyes, so my immediate thought was to mount them on the head as if they were eyes, although there would be a camera there too, over or under them. :) The servo brackets from Lynxmotion and their pan/tilt kit are excellent; I mounted it all on the top and even managed to fit the webcam on the tilt bracket very easily. In time I will of course disassemble this and create a frame for the head and mount it all there instead.
7. Feb 2007 After much trial and error I managed to get my OpenCV Java-JNI C code to compile in Visual Studio Express, and soon after I was able to create my Java integration with OpenCV for face detection. My API has methods for sending down a 24-bit BGR-style image which is copied into a temporary OpenCV image kept in the native implementation. The next improvement will be to allow it to maintain several cascades so that you can have it detect different types of objects. I just have to learn how to create new training sets - which might be more difficult than I hoped.
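The Java side of such a bridge is small; a sketch with illustrative names (the real method and library names differ):

```java
// Sketch of the Java half of the JNI bridge: the native method receives a
// 24-bit BGR frame and returns detected face rectangles packed as
// {x, y, width, height, x, y, ...}.
public final class FaceDetector {
    static {
        // Loads the VC++-compiled JNI DLL (jcp1opencv.dll on Windows).
        System.loadLibrary("jcp1opencv");
    }

    /** bgr holds width * height * 3 bytes, row by row. */
    public native int[] detectFaces(byte[] bgr, int width, int height);
}
```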
1. Feb 2007 (photo)
1. Feb 2007 (photo)
19. Jan 2007 First version of the web pages up and running. Hoping to move it all to robot.lonningdal.net some day soon. |
18. Jan 2007 Some software testing shows that the Via CN13000g doesn't have enough horsepower to run MS Speech Recognition; it easily eats up all the CPU time! :( I need to distribute the processing over wireless networking to make it work like I want. Also made my first JNI bridge between a VC++-compiled DLL and Java. I will work further on this to interface OpenCV with Java.
17. Jan 2007 Via system up and running with Windows XP and all necessary drivers and APIs installed. All works fine. Worked some on the robot software to make a GUI frontend to visualise information about each module (vision, hearing, speech). |
16. Jan 2007 Received package with Via Epia CN13000g mini-itx board, memory, webcamera and a case. Looks very nice and small! But World of Warcraft Burning Crusade was also purchased so I just had to play that a little! :) |
15. Jan 2007 Got the PicoPSU in the mail. It's really, really small and will be great for my robot.
Summer 2005 Made the first sketches and 3D Studio models and renderings. Also prototyped CloudGarden JSAPI. Researched the net and found out about OAP. I had a lot of ideas and wrote down a lot, but didn't take the time to start on the project. It would be some time until I did...