Portable Actuation System Benchtop Test

0. before we start

Hey there! It's good to meet someone who is also interested in assistive robotics! In this project, I present a portable actuation system that runs on just a laptop and is capable of real-time walking assistance. While developing the portable actuation system, I looked into several aspects: light weight, unified I/O, sufficient actuation force, and robustness. I'm more than happy to share the current progress of my work, and any advice is encouraged and welcome. I'll take you through the key components of this project and share my thoughts with you. Before we start, please take a couple of minutes to watch the video above, which will make it easier to follow my ideas.

1. overview

As shown in the video, the portable actuation system is dedicated to assisting plantarflexion during walking. When pressure on the sesamoid is sensed by a pressure switch, the high-level controller generates a torque trajectory based on an interpretation of soleus muscle activation during walking. The high-level controller runs on a laptop and communicates over the EtherCAT protocol, which assures a real-time response. A MAXON EPOS4 serves as the low-level controller, taking care of sensor readings and motor control. The motor is a MAXON EC-90 flat, which is commonly used in the exoskeleton and robotics fields. It's exciting to showcase the current progress, with the whole system running well; however, some parts still need to be finished, such as the battery, motor housing, and actuator design.

2. ethercat communication

What's EtherCAT?
Imagine that your real-time target machine is no longer a chunky Speedgoat but your slim laptop: this is what EtherCAT enables, and I found it very useful for robotics! EtherCAT (Ethernet for Control Automation Technology) is an Ethernet-based fieldbus system invented by Beckhoff Automation. It is optimized for process data, which is transported directly within a standard IEEE 802.3 Ethernet frame. So, what benefits do we get from this protocol? Here are the answers.
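If you're curious what "transported directly within the Ethernet frame" looks like at the byte level, here is a tiny Python sketch (separate from my actual setup, which runs in Simulink and Visual Studio) that packs and unpacks the 2-byte EtherCAT header sitting right after the Ethernet header (EtherType 0x88A4): 11 bits of frame length, one reserved bit, and a 4-bit protocol type. The helper function names are my own.

```python
import struct

ETHERCAT_ETHERTYPE = 0x88A4  # EtherType identifying an EtherCAT frame

def pack_ecat_header(length: int, protocol_type: int = 1) -> bytes:
    """Pack the 2-byte EtherCAT header: 11-bit length, 1 reserved bit,
    4-bit protocol type (1 = EtherCAT datagrams)."""
    assert 0 <= length < 2 ** 11
    word = (length & 0x07FF) | ((protocol_type & 0x0F) << 12)
    return struct.pack("<H", word)  # little-endian on the wire

def unpack_ecat_header(raw: bytes):
    """Return (length, protocol_type) from the 2-byte header."""
    (word,) = struct.unpack("<H", raw)
    return word & 0x07FF, (word >> 12) & 0x0F

header = pack_ecat_header(length=32)
print(unpack_ecat_header(header))  # (32, 1)
```

After this header come the EtherCAT datagrams themselves; the point is simply that everything rides inside one ordinary Ethernet frame, which is why a laptop with a standard network port can act as the master.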

1. Real-time: EtherCAT keeps the controller and the machine synchronized by using a distributed clock mechanism. The master device (in my case, a laptop) constantly checks with the slave device (the EPOS4 controller) to ensure that frames are not missed or delayed.

2. Simulink Integration: One of the biggest benefits of EtherCAT is its ability to integrate with Simulink. When I was building the actuation system, my idea was to create an experimental platform for all the lab members to test their algorithms, which means my system should be able to run Simulink models, as Image 1 shows. Since Beckhoff chose to work with Visual Studio, the whole EtherCAT structure runs inside Visual Studio, which makes it possible to bring a Simulink model into EtherCAT with the TE1400 module. This makes my system accessible and user-friendly for testing control algorithms.
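To give a feeling for the distributed clock idea from point 1, here is a toy Python simulation of a drifting slave clock being nudged back toward the master every cycle. This is only an illustration with made-up numbers; real EtherCAT distributed clocks measure propagation delays in slave hardware and achieve far tighter synchronization.

```python
# Toy model of EtherCAT-style clock correction (illustration only; real
# distributed clocks compensate propagation delay in slave hardware).

def simulate(cycles=1000, cycle_time_us=1000.0, drift_ppm=50.0):
    """The slave clock drifts by `drift_ppm`; every cycle the master reads
    the offset and the slave removes a fraction of it (a simple
    proportional correction). Returns the worst offset seen, in us."""
    master_time = 0.0
    slave_time = 0.0
    gain = 0.5  # fraction of the measured offset corrected per cycle
    max_offset = 0.0
    for _ in range(cycles):
        master_time += cycle_time_us
        slave_time += cycle_time_us * (1.0 + drift_ppm * 1e-6)
        offset = slave_time - master_time
        slave_time -= gain * offset  # correction applied by the slave
        max_offset = max(max_offset, abs(slave_time - master_time))
    return max_offset

# With correction the offset stays well under a microsecond; without it
# (gain = 0) the clocks would drift apart by 50 us over these 1000 cycles.
print(simulate())
```

The takeaway is that a small periodic correction keeps the clocks bounded even though the slave's oscillator is imperfect, which is what lets the master guarantee that cyclic frames land on time.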

simulink+ethercat (Image 1: EtherCAT integrated with Simulink in Visual Studio)

3. high-level controller

For the high-level controller, I've designed a time-based controller which generates a torque trajectory for walking assistance during the plantarflexion phase. The trajectory is an interpretation of soleus muscle activation during human walking. The timing of assistance is detected by a simple pressure switch, which activates when the pressure on the sesamoid rises above a certain threshold. The pressure on the sesamoid is a good indicator of the transition into the plantarflexion phase, because the sesamoid is the main support during plantarflexion. Image 2 shows how the controller performs, and Image 3 shows the measured torque and ankle angle during walking. Notice that the torque in Image 2 is estimated, while the one in Image 3 is an actual measurement. Instead of using a torque sensor for closed-loop control, the low-level controller uses motor speed and current to estimate the torque. The measurement shows a promising result for this approach, although there is still some undesired sudden force at the beginning of torque development, as I boxed with the purple dashed line. This may be due to a loose cable, which could be solved by adding preliminary tension through commanding a very low torque. Although the control algorithm still needs improvement, this is a satisfying result at this initial stage.
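For readers who prefer code to words, the trigger-and-trajectory logic above can be sketched in a few lines of Python. The bell-shaped profile and every constant here are illustrative placeholders, not my tuned parameters, and the real controller runs as a Simulink model.

```python
import math

def torque_command(t_since_trigger, peak_torque=10.0, peak_time=0.25, width=0.08):
    """Assistive torque (Nm): a Gaussian bump loosely shaped like soleus
    activation, started when the pressure switch fires. All numbers are
    illustrative placeholders, not calibrated values."""
    if t_since_trigger < 0.0:
        return 0.0  # no assistance before the sesamoid switch closes
    return peak_torque * math.exp(
        -((t_since_trigger - peak_time) ** 2) / (2 * width ** 2))

def controller_step(pressure, threshold, t, trigger_time):
    """One control tick: latch the trigger time when pressure crosses the
    threshold, then evaluate the trajectory from that instant."""
    if trigger_time is None and pressure > threshold:
        trigger_time = t  # switch just closed: start the trajectory
    cmd = torque_command(t - trigger_time) if trigger_time is not None else 0.0
    return cmd, trigger_time
```

Calling `controller_step` every tick reproduces the behavior in Image 2: zero torque until the switch state flips, then a smooth rise to the peak and a decay back toward zero before the next step.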

system performance (Image 2: Controller performance. Yellow - motor torque, Orange - switch state, Blue - motor speed (rpm), Green - torque command)

torque vs angle (Image 3: Measured data. Blue - measured torque, Orange - ankle angle)
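The current-based torque estimate can be sketched as below. The torque constant, gear ratio, and efficiency are assumed example numbers, not the datasheet values of my exact EC-90 flat winding, and I omit the speed-dependent friction term that the full estimator would include.

```python
# Sketch of sensorless torque estimation: the EPOS4 reports winding
# current, and output torque scales with the torque constant and the
# transmission ratio. All constants below are assumed example values.

KT = 0.109        # torque constant in Nm/A (assumed)
GEAR_RATIO = 10   # transmission ratio, motor to ankle (assumed)
EFFICIENCY = 0.9  # drivetrain efficiency (assumed)

def estimate_output_torque(current_a: float) -> float:
    """Estimate joint torque (Nm) from measured motor current (A)."""
    motor_torque = KT * current_a          # torque at the motor shaft
    return motor_torque * GEAR_RATIO * EFFICIENCY

print(round(estimate_output_torque(5.0), 3))  # ~4.905 Nm at 5 A
```

This is why no torque sensor is needed for the benchtop test: as long as the constants are calibrated, current alone gives a usable torque estimate, and the speed signal can later correct for friction and inertia.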

(to be continued...)

Video/Image: An-Chi He

VR/AR Platforms Pros vs Cons

The first week's courses demonstrated various VR/AR technologies; I'll share my opinions and experiences below.


CAVE

An immersive virtual reality room that doesn't need auxiliary devices such as glasses. For me, this felt like walking into a movie theater for the first time as a child: the experience is stunning, the possibilities for visualizing data or concepts are limitless, and it's the most comfortable and awe-inspiring VR experience I've ever had. The only cons I can come up with are its cost and that it requires specific content made only for the CAVE.


HTC VIVE

The positive side of the VIVE is that it is immersive and splendid: the tracking is almost impeccable, and the high frame rate gives me a seamless experience when I look around. As a video game enthusiast, I had tried the VIVE a few times before this course; the video game performance is extraordinary, and users can even display their PC desktop in a 360-degree space. I believe the possibilities of the VIVE don't stop at video games; remote control could be a great use case if the following con can be addressed. The negative side is eye fatigue from long-term usage. I think this phenomenon is even more serious for people who wear glasses: human eyes adjust their focus to see a certain area clearly, but this doesn't work in a VR device such as the VIVE, so the eye muscles contract all the time, which causes eye fatigue, eye strain, or even headaches.

Microsoft HoloLens

The HoloLens shows a totally different aspect compared to the VIVE. The positive side is its versatility, since it augments the reality that we can observe. Many uses of mixed reality came to mind after I tried it: in construction, workers would have a better idea of what the designer wants; in outdoor exploring, users would have more information, such as what terrain they are facing; in education; and any other type of augmentation. Since my master's thesis integrates environmental information into an exosuit, the HoloLens is very inspiring for me. In conclusion, more information can be observed thanks to AR. However, the HoloLens isn't very friendly for people who wear glasses, like me.

Smartphone AR/VR

I was surprised by the performance and capability of this demo; it is hard to imagine that our phones can achieve such an immersive, interesting experience. The pro is good cost/performance: it is the easiest way to deliver VR/AR to users who usually don't have access to the devices above. The cons are that the field of view felt narrow and the resolution low, but in other words, you get what you pay for.

AR with Unity

The image above presents a golem named Messy in its natural habitat. This augmented reality (AR) application is built using Unity and the Vuforia engine. Unity, a powerful cross-platform game engine, is reinforced by Vuforia, an augmented reality software development kit. Together they make implementing an AR application very approachable: Unity's friendly user interface and well-developed community, together with the solid integration Vuforia provides, contributed to an exceptional experience of building an AR application. I was able to find various resources, such as models and tutorials, on the internet to accomplish this assignment.

When it comes to other AR applications on mobile phones or glasses, they are not hard to find in daily life. In shopping, Amazon has AR displays for furniture and consumer electronics that let you see how an item would look in your house. Other applications include education, which delivers information more intuitively, and Google Maps, which provides real-scene navigation using mobile phone AR. The target image basically serves for feature matching; thus, using an image classification method such as a CNN could provide the same function and make the AR more versatile.

During the process, I failed to import the predefined database containing the astronaut image, so I had to import it manually through the Vuforia website. However, this taught me how to manage the database and gave me the chance to use many different resources, which will be helpful for future projects. It is amazing to be able to view AR on a portable device, which gives the user more information and delivers ideas more specifically, for example in construction site communication. The most impressive part is still how easy Unity makes it to learn and use this technology. Before the class, I went through a tutorial on YouTube, which made it easier for me.

Special thanks to YouTuber mayank.technical for the golem 3D model and tutoring; the other models are free from the Unity Asset Store.
Trees & Grass: Nature Starter Kit 2 by SHAPES.
Rocks: Rock and Boulders 2 by Manufactura K4

Image: An-Chi He

Google Translate AR

Like many incredible AR applications that augment the information a user can receive, Google brought AR technology into translation, which I consider one of its smartest moves, since when we use a translator, chances are we are traveling abroad. This technology was called "Word Lens" at the beginning, and the translation quality wasn't very good; by combining it with Google Translate, it can now recognize words and translate them with good accuracy. However, like many automated, machine-driven applications, Google Translate still can't translate correctly under some conditions, but it's fair to say it delivers a correct notion of what the user wants to know. Now, I want you to imagine yourself as a tourist traveling to China; see how Google Translate works for you and whether you are able to understand the content.

Above is a notice posted by a supermarket. It is bilingual, so we can get a good sense of how well the translator did its job. As we can see, the translated content is far from perfect; however, the information is understandable, which already makes it a useful function for people traveling abroad. I've also noticed that sometimes the translated content can differ even for the same sign, which may be because the word identification function suffers some interference in certain situations.

The image above is a price list posted outside a salon, written in Chinese characters. What surprised me is that the translator is able to reproduce the font in the top section of the image, although it doesn't work very well, as different fonts also appear in the lower part. It's fair to say that if we see a poster in another language, we may well be able to understand its meaning using this technology, and it is also impressive that Google is trying to improve it in different aspects, even fonts.

This image also shows two different languages; however, the app recognized the English content and covered it with the same English content. Sometimes it seems hard to understand why it does so, but it could be useful for users who can't read certain styles of lettering, such as script or handwriting, which may be hard for foreigners to understand.

All the pictures were taken in Chinatown. We now have a brief understanding of Google Translate's ability: yes, it is far from perfect, but it does help users understand different languages, since humans can grasp the meaning from just the key words. Technology doesn't only bring money to industry; it is also the reason our lives are becoming easier and more incredible than ever before.

Image: An-Chi He

AR Skyview

(Image 1: colorful sunset from my hometown Taiwan)

Humans have long observed the sky and given it different meanings across different cultures. According to Wikipedia, "The oldest accurately dated star chart was the result of ancient Egyptian astronomy in 1534 BC. The earliest known star catalogues were compiled by the ancient Babylonian astronomers of Mesopotamia in the late 2nd millennium BC, during the Kassite Period (c. 1531–1155 BC)." Humans were already fascinated by the dots of light in the sky and wondered what they were. Today's technologies give us the tools to observe the stars easily, and each year we get better and better images of the planets, yet we still have very scarce knowledge of the universe. I feel a sense of romance whenever I visit the Chicago planetarium; I was so touched by the movies shown there, which made me realize how small we are in the universe, and how lonely.

The AR app we are testing today is called PuniverseX. It uses your phone's location to determine the sky you are currently looking at, and by calculating the pose it shows you different views of the field. Besides this, I am very surprised by how much information it gives; it's a very complete app compared to a similar one I tried six years ago.

(Above image: AR view from PuniverseX. Below image: details from app)

I took the image above from the app's real-time view. It even labels comets in the sky; you can tap a label at the top of the screen for more information, including a short story and a link to Wikipedia, as the image above shows. We can also see that the app tells you where you are, and I found the grid feature useful when I want more information from another observation method, such as a website.

(Above and below images: Tonight's Sky feature)

The images above show two functions contained in the app. The first is Tonight's Sky, which tells you the location information of the Earth and the Moon phase. Looking down at the following image, it shows the user what the Moon looks like for the entire month. I think it may be useful for people who sail occasionally, since it doesn't require expensive equipment to locate yourself and estimate the approximate date.

(Above image: Tonight's sky feature)

The image above is also from the "Tonight's Sky" feature in the app. It tells you the rise and set times of different stars, which is very useful for people who do star photography.

One extra function I can think of: if the app could read aloud the information and stories of the stars we find in the sky, wouldn't that be very cool? The problem comes down to which part would be read; although the app links you to Wikipedia, the developer would still need to decide manually which part of the information should be read. The stories I've read on the internet about how people in ancient times attached stories to the stars are very interesting. Imagine that you are lying in a grass field and someone is reading you the stories of the stars; isn't that romantic?

Image: An-Chi He

Hey that's my chair

Have you ever wondered how furniture would look inside your house? Or even in the middle of the road?! In this episode I am testing AR apps for placing furniture, but not just in a room; we'll also try some uncanny places, like sidewalks, or the middle of the road, places you probably never imagined. I will test three apps: Wayfair, Housecraft, and Google AR Playground.

(↑Image 1)

First, we test Wayfair. As Image 1 shows, I placed a big desk in the middle of the lab. The app works pretty well in a well-lit, spacious environment such as the lab I was in; it was easy for the app to locate the floor and place the item I chose. I also tested a smaller item to try out the scaling, and it works well. However, the app struggles to locate the floor when the environment is not well lit or has many obstacles; it takes a long time to recognize the floor, or it simply doesn't work most of the time. This is a crucial issue if you want to place a new item in your room, or in a place like the restaurant I am sitting in, and it would discourage me from continuing to shop on Wayfair. So, the first point I'd like to make is that an app should be good at locating the floor, or any desired surface. Overall, I don't like this one.

(↑Image 2)

(↑Image 3)

Second, I tested an app called Housecraft. The locating speed is phenomenal, way faster than Wayfair. I put a 60-inch 4K TV in the middle of the road and laid a sofa there, as in Image 2, and I was able to work with it even on the sidewalk at night, or in a dimly lit restaurant. As Image 3 shows, there is a welcome mat on the restaurant table! The owner might be mad at me if I really put a mat there.

(↑Image 4)

(↑Image 5)

I was surprised that Housecraft took only around two seconds to locate the table surface, and the scale also matched the dimensions correctly. It shows a progress bar around the window frame indicating how well the surface is recognized, as in Image 4, and when it's done, it shows a pretty cool hexagon animation overlaid on the surface, as Image 5 shows.

(↑Image 6)

(↑Image 7)

It also has a UI with three functions, as in Image 6: 1. move the placed objects, 2. clear the objects, 3. recalibrate. Obviously, the developer team has worked on the potential issues users may face, such as an object having the wrong scale after some camera rotation; the calibration function solves this perfectly. Overall, this is one of the best apps I've seen; the experience is wholesome and well developed, unlike some apps, such as the Wayfair one, which feel awkward to use. The only con is that there aren't many items in the app, approximately 50, and they aren't real items you can purchase; I suspect the team may be waiting to be acquired by some company. In short, you get a nice user interface and user experience, plus fun extras, just not many items. They do have something fun in the app as well; for example, I put a tornado in my room, as in Image 7!

(↑Image 8 and 9)

The third app I tested is not related to furniture, but I just felt like testing it for fun: the AR Playground in the Pixel's built-in camera app. It has Pokémon, and you can let them fight!! How cool is that? In the images you can see Charizard and Pikachu, drunk and fighting on the road; I kept yelling at them that a car was coming, but they just wouldn't listen. Eventually, Charizard turned to me and spat fire toward me! So I took a few snaps, like Images 8 and 9, and left them there. Not long after, I heard an ambulance, and I never saw them again. I do hope they are doing well, no matter where they are.

In conclusion, I had a lot of fun in this episode. Let me summarize the keys to a good AR app. The first is speed: the user shouldn't have to take long to scan the desired surface, otherwise they may not even use the app. The second is a good user interface: if I need to exit the AR view to remove objects, it's not a good app; the interface should let users edit items in place. The third is surface detection: different from the first point, which focuses on speed, this one looks at how well the app can find a surface even when it's poorly lit or small, since the typical environment for this kind of app is the user's home!

Image: An-Chi He

Classification Eyewear

(Image from: Sumit Saha, Medium)

As GPU computation gets more powerful every day, having a real-time classification neural network running on a portable device such as eyewear could come soon. But do we really need it running on everything around us? Or would it be overwhelming for the user? The answer is not just yes or no; I'd like to discuss this topic around two points: 1. How crucial is the displayed information? 2. Is the device smart enough to display information based on the user's current activity?

Imagine that you are watching a movie, and your eyewear keeps feeding you information, such as identifying the cars and people on screen; that would just be annoying! This scenario connects easily to the two points above: 1. the information is not crucial, and 2. the system should be able to tell that I'm enjoying a movie. Based on this, the system should be able to determine which activity the user is currently engaged in, such as driving, crossing the street, or just having fun and chilling in the park.

Back to the first point: the device should be clever enough to switch between activities to decide what to display. While the user is driving, for example, the crucial information is the current speed; highlight cars emerging from other lanes; give me a wider view while I'm switching lanes, such as showing the view of the lane I'm merging into; and highlight any unusually fast-moving objects. In short, show any information that is important but not distracting, to keep the driver safe! I'd say this would be very helpful for vehicles with many blind spots, such as big trucks or buses.

On the second point, the device should be smart enough to tell whether the user is driving, watching a movie, crossing the street, or just chilling. So the device doesn't only classify objects, but also the user's activities. For instance, if an intersection is present or the user is facing the road, the system can estimate the probability that the user is about to cross; based on this, the eyewear can display any vehicle in the user's periphery and highlight incoming ones. Beyond this, the device can connect to the internet, so the systems of nearby drivers also learn that pedestrians are crossing the road, enhancing safety on both sides.

Continuing from the second point, the device should have an internet connection, which makes it easier to detect the user's current activity. We can use the GPS signal to detect whether the user is at a park or a movie theater; if the user is in a shopping mall, it would be convenient to display prices from different retail sources. Furthermore, beyond GPS, indoor places like museums, shopping malls, and indoor climbing gyms could broadcast a signal to tell the device where the user is, so it can display context-specific information, such as a tour guide for a museum or a potential climbing route for a climbing gym.

Then how does the system decide whether information is important enough to display? My answer is to let people decide. The eyewear should display only the essential information predefined by the developer team, such as highlighting a fast incoming car while driving, but leave plenty of control and choice to the user about what else to display. By collecting data on what users configure for different places, the system could then suggest that most users enable certain display functions in the current surroundings.

Back to the main question: will it be annoying if the classification algorithm is always running? I'd say no, because by the time we have eyewear that can run neural networks, the eyewear will also be smart enough not to be annoying :)!

Image: An-Chi He

Project 001


Imagine that one day you walk into a bookstore and a small Totoro standing on a book tells you its story, showing you 3D movie trailers to let you know how cool the book is. This is how our project works! We've created an AR application that serves as a short trailer, leading the reader to a good understanding of the incredible adventure they can have in the book. We hope this trailer inspires the imagination of readers and of kids who just want to see cool stuff! We've created several animations from the famous movies directed by Mr. Hayao Miyazaki, including Castle in the Sky (1986), the famous My Neighbor Totoro (1988), and Nausicaä of the Valley of the Wind (1984). When you turn on the AR app, you will see all the characters flying and moving around the castle from Castle in the Sky, all happening on top of the book cover, accompanied by the theme of Castle in the Sky as background music. When you press the AR button, you will hear a short narration about the book, which we hope will bring you some inspiration and the desire to read it. Now, come join us; let's explore the scenes behind this app.

One can easily imagine how time-consuming building all the 3D models for a book could be; it might be no less work than building them for a movie trailer. Thanks to the sharing communities of SketchUp and Unity, all our models come from free internet resources, listed below. It is incredible how detailed and delicate some of the models are. For example, the Tiger Moth airship even has structures inside the ship, with several little figures in there. The sky castle is also well built; we added a tree to the model to make it look more like the one in the movie. The two models we built ourselves are relatively simple: we recreated the iconic bus stop sign from the Totoro movie, and the iconic broom from Kiki's Delivery Service. One interesting thing is that in Japanese and Traditional Chinese, that movie is actually called "Black Cat Delivery Service," and there is a real delivery company named after it in both Taiwan and Japan.


Video Demo


Models & Sounds

Model Overview 01: Here is the overview of models that stand on the book cover.

Model Overview 02: Models from a far sight, including flying units.

Model name: Sky Castle. / Animation: Castle in the Sky. / Source link

Model name: Robot Soldier. / Animation: Castle in the Sky. / Source link

Model name: Tiger Moth. / Animation: Castle in the Sky. / Source link

Model name: Mehve. / Animation: Nausicaä of the Valley of the Wind. / Source link

Model name: Santa flies on Mehve. / Animation: Nausicaä of the Valley of the Wind. / Source link

Model name: King of Insects Oumu. / Animation: Nausicaä of the Valley of the Wind. / Source link

Model name: Totoro. / Animation: My Neighbor Totoro. / Source link

Model name: Susuwatari. / Animation: My Neighbor Totoro. / Source link

Model name: Fantasy Forest Environment. / From: Unity Asset Store. / Source link

Model name: Rock and Boulders 2. / From: Unity Asset Store. / Source link

Model name: Broom. / From: Self made. / Source link

Model name: Stop sign. / From: Self made. / Source link

Theme Music: Laputa: Castle In the Sky Suite / Animation: Castle in the Sky. / Source link

Sound effect: .hack//g.u. / Animation: .hack//g.u. / Source link


It would be wonderful if books had their own trailers that could be viewed before purchase: a 3D book trailer! How cool is that! However, several requirements need to be satisfied to make this work. First, budget: the publisher needs a decent budget to build models, trailers, the AR app, and 3D scenes, and not every book can advertise itself like that. Second, internet speed: 5G would be needed to download such a large 3D trailer, and all publishers should build on the same back end so the user doesn't need to download a separate app for each book. One good approach could be for publishers to hire companies that make 3D trailers for them, so the platform shares the same base and publishers don't need a special department just for 3D models. Third, how pervasive can AR equipment become? If everyone had their own wearable AR device, this wouldn't be a problem, and it is very likely to happen in the future; for now, everyone still holds up a phone to see most AR applications, and it could take years or a decade to reach the ideal stage for 3D book trailers.

This doesn't only work for book trailers; it works for movie posters too, with the target image serving as a detection base, just like the book cover. Many possibilities can be inspired by this project. Beyond the 3D movie trailers we just mentioned, products can have trailers or reviews, giving users a quick overview that encourages a purchase. A user's guide can provide more instruction after purchase; we may no longer need to rely on paper-based instructions, since AR instructions can assist you, reduce mistakes during the process, and provide a more complete guide. For educational purposes, we can have a virtual instructor like the one we built, delivering information not only verbally but also visually. When viewing a painting, the AR app could play a short historical scene: imagine looking at a photograph from WWII as airplanes fly out of the picture, troopers crawl on the ground, and explosions shock you as if they were really in front of you. I believe this would be one of the best ways to provide virtual tours in museums, where users digest information not only from words but also from images, which carry more information. For the music industry, a user could enjoy a small concert to decide whether they like it; this works for records or concert posters. All of the above comes down to how we can deliver more information to users and make them want the whole experience.

We do wish for the day when wearable AR devices are common, and it is exciting to experience this transition stage as the technology comes true. Some may argue that AR could diminish the imagination involved in traditional media, since a novel leaves a lot of room to imagine what the characters or scenes might look like, but we believe this kind of AR inspires more imagination than the space it takes over. We weren't expecting to get so much inspiration from this project, but now we are surprised by how much it has brought us!

Image: An-Chi He, Joshua Peterson


