First part arrives

The first part has come back from Shapeways. I actually ordered this one a week after the visor component, but there’s been a backlog on the machine that prints transparent materials, so the visor won’t be in until early next week.

[Image: the printed part from Shapeways]

Not bad. Near as I can tell it’s exactly to specifications and it feels pretty tough.

Two related observations …

  1. Shapeways, when you send me a part that fits in a 12 x 8 x 3cm baggie, please don’t pack it in a 20 x 15 x 12cm box and charge me $20 shipping. Use a smaller, cheaper box.
  2. I got home to find a UPS card in my letterbox. Oh no, I thought, another game of courier tag. But no, they’d left it at a nearby 7-Eleven for pickup. Brilliant. UPS, you guys rock! FedEx, please do something similar.

Final design

Once you get the hang of FreeCAD’s features you can make some nice-looking designs.

[Image: the attachment mechanism]

The STL file for the dark green attachment mechanism has been sent to Shapeways, and should be returning in physical form in a few weeks. Then we’ll see if the pieces fit together.


CAD results

FreeCAD has a few rough edges, but it delivered the goods in the end.

[Image: the parabolic visor]

This is the parabolic visor, with integrated attachment points. Let’s submit it to Shapeways and see how it comes out in transparent plastic.


FreeCAD

Last year I posted an article about a baseball cap head-up display. I’ve been looking to improve on it, and have decided to use CAD and 3D printing to do it properly. So the first step was to learn a CAD package, preferably something free and available under Linux.

I thought I’d try FreeCAD. I don’t know whether this is common among CAD packages, but I really enjoyed the way its Sketcher tool uses constraint-based rules to specify the design. Instead of specifying (x,y) coordinates for each point, you first draw approximate lines and then start imposing constraints. For example, you might specify that a line is 15mm long, or that two lines must be angled 60 degrees from each other, or that two points must have a horizontal separation of 20mm.

Every time a constraint is added the degrees-of-freedom counter goes down, and when it hits zero you have a fully-constrained design, ready for export.
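
For the curious, the same workflow can be driven from FreeCAD’s Python console. Here’s a minimal sketch of the constraint examples above, based on my reading of the API (Part.LineSegment is called Part.Line in older builds, and the coordinates are illustrative):

    import math
    import FreeCAD as App
    import Part
    import Sketcher

    doc = App.newDocument("Demo")
    sketch = doc.addObject("Sketcher::SketchObject", "Sketch")

    # Draw two approximate lines first ...
    l0 = sketch.addGeometry(Part.LineSegment(App.Vector(0, 0, 0),
                                             App.Vector(14, 1, 0)))
    l1 = sketch.addGeometry(Part.LineSegment(App.Vector(14, 1, 0),
                                             App.Vector(20, 12, 0)))

    # ... then impose constraints instead of exact coordinates.
    sketch.addConstraint(Sketcher.Constraint("Coincident", l0, 2, l1, 1))
    sketch.addConstraint(Sketcher.Constraint("Distance", l0, 15.0))   # first line is 15mm long
    sketch.addConstraint(Sketcher.Constraint("Angle", l0, 2, l1, 1,
                                             math.radians(60)))       # lines 60 degrees apart
    sketch.addConstraint(Sketcher.Constraint("DistanceX", l0, 1, l1, 2,
                                             20.0))                   # 20mm horizontal separation
    doc.recompute()

Each addConstraint call knocks one or two degrees of freedom off the counter.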

At least, that’s the theory. In practice, FreeCAD is still in beta, and I ran into a few problems …

  • Crashes. Frequent crashes. They’re not hard to reproduce, and crash bugs are pretty easy to catch and fix, so I’m surprised they’re still around. Or maybe they’re not easy to fix, in which case an auto-save feature is needed urgently.
  • Performance. I was sketching the cross-section of a 75mm radius parabolic dish at 1mm intervals, and as I was getting to 150 points it was taking over 10 seconds to re-calculate after adding each new point. Given that a basic PC can do over a billion floating point operations per second, that’s really poor, especially since half the points were locked with explicit (x,y) coordinates and didn’t need re-calculating.
  • Lack of feedback. After entering all the points it becomes a game of tracking down and removing all the remaining degrees of freedom. But the interface doesn’t show which points are unconstrained, so a lot of guesswork is required. That’s a pain when there are 150 points, and worse when you’ve accidentally put two points in the same location and one of them is unconstrained.
  • Needs another constraint type. When designing 3D objects it’s fairly common to construct a “wall” around a complex shape, for example when designing a hollow object. So it would be really nice to be able to constrain two lines to be parallel and separated by a certain distance. FreeCAD lets you set two lines to be parallel, and you can constrain a point to be a certain distance from a line, and if you use both simultaneously you can achieve the desired result (see the sketch after this list). But it’s easy to make a mistake, and the pair of rules is difficult to maintain, so a single constraint would be much better.
  • Needs B-splines. I realize it can be difficult to calculate constraints for polynomial curves, but if you’re making a 3D object odds are it needs a curved surface of some sort. My parabolic dish certainly does. How about constraining the points based on the straight line segments, but rendering them as B-splines?
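
For the record, here’s roughly what the two-rule workaround from the “Needs another constraint type” point looks like, under the same assumptions as the console sketch above:

    # Two lines meant to form a 3mm-thick "wall", added the same way as before.
    g0 = sketch.addGeometry(Part.LineSegment(App.Vector(0, 20, 0),
                                             App.Vector(30, 20, 0)))
    g1 = sketch.addGeometry(Part.LineSegment(App.Vector(0, 23, 0),
                                             App.Vector(30, 24, 0)))
    # Rule 1: keep the two lines parallel.
    sketch.addConstraint(Sketcher.Constraint("Parallel", g0, g1))
    # Rule 2: pin the start point (position 1) of g1 at 3mm from line g0.
    sketch.addConstraint(Sketcher.Constraint("Distance", g1, 1, g0, 3.0))

It works, but the wall thickness now lives in two separate rules, which is exactly the maintenance problem described above.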

Anyway, I’ve now finished drawing the parabolic curve. Luckily I realized that you only need half a parabola to create a 3D dish, since it gets rotated around its Y axis. It remains to be seen whether the 3D editing tools in FreeCAD are up to scratch.
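
For what it’s worth, the points themselves are trivial to generate. A quick Python sketch (the 40mm focal length is an arbitrary stand-in, not the real figure):

    # Half-parabola cross-section z = x^2 / (4*f), sampled at 1mm
    # intervals from the axis out to the 75mm rim (76 points in all).
    f = 40.0  # focal length in mm; an arbitrary stand-in value
    points = [(x, x * x / (4.0 * f)) for x in range(76)]

Revolving that profile around the axis gives the full dish; the hard part is getting the points into the Sketcher, not computing them.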

 


notr.tv

Just launched a new website, notr.tv

It turns out that it’s possible to overlay HTML on top of YouTube players in most browsers (with smartphone browsers being the main exception). And the YouTube API gives you a lot of information about video playback. Combine the two, and you can annotate other people’s videos.

The obvious use is generating Downfall parodies. And Osama bin Laden and Pope parodies.

But there is also a lot of content out there that isn’t in English. And a lot of YouTube users who don’t understand English. So there is plenty of potential for crowd-sourced subtitles.

At the end of the day, I don’t know how this service is going to be used, but it will be interesting to find out.


Hedgehogging

Well, the PhD was submitted for examination in early July, and within a fortnight I had started work at a company called the Portland House Group. The job came out of nowhere and I couldn’t have asked for better timing.

The company is a hedge fund, and I’m working on trading algorithms. No high-frequency trading, so don’t blame me for any flash crashes. It’s a new industry for me, but they wanted someone with code-breaking experience, and that was my first job out of university.

Anyway, that probably means even fewer posts to this blog in the coming months.


The Winklevoss strategy

The Winklevoss twins are my heroes.

They made a half-arsed attempt at a marginal business idea, which Mark Zuckerberg stole, improved, and actually implemented. At the end of the day they basically did nothing, spent a total of $400 (plus legal fees!), and wound up receiving $65 million.

In terms of return-on-effort we are talking six figures per hour! Legendary.

So here’s my new business strategy:

  1. Come up with a poorly-thought-out business idea.
  2. Find a talented developer/engineer and trick them into stealing it.
  3. Do nothing while they make the idea viable and put in the hundreds of hours needed to make it work.
  4. Sue them.

To wit: I think 3D printing is going to be huge – nearly any object that can be described can also be manufactured, with little or no human labour. Once 3D printers become ubiquitous and commoditized, the main bottleneck will be on the design side.

So here’s the idea: set up a website where people can post their requirements for an object, and how much they’re willing to pay. Other people post 3D designs that implement the object. The best design wins the prize. Basically, the 99designs model applied to 3D objects.

It’ll be huge. Someone should totally do it.

(Dear lawyers: In terms of jurisdiction, this was written in Victoria, Australia; is hosted somewhere in the US; and is visible in every country that doesn’t block WordPress. Go figure.)


Pedestrian polling

A holy grail in the field of geospatial science is the emotion map. Ideally, this is a real-time map showing where people are located and how they feel, presumably colour-coded by emotion. I don’t know if they have any practical uses, but they’d be a nice way to gauge the mood of a city.

The problem is, these maps are almost impossible to generate. Previous attempts have relied on volunteers carrying some kind of recording device which may periodically ask them how they feel, or perhaps record their heart rate and skin conductivity and try to infer their emotional state from that. Unfortunately, these approaches result in a very small sample size, typically skewed toward university students, and they are never ongoing projects.

So I thought, if you want to know how people feel, why not just ask them?

The idea is to place a highway-style lane-selection sign over a reasonably wide stretch of footpath, with a motion sensor covering each of the three “lanes”. Whenever someone walks under one of the options it beeps and records their selection. Sure, it wouldn’t record spatial information like an ideal emotion map, but you’d get a large and unbiased population sample, and you could always deploy a few of them around a city.
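
The electronics could hardly be simpler. Here’s a rough Python sketch of the logging side, assuming a Raspberry Pi with three PIR motion sensors on arbitrarily chosen GPIO pins (the beeper is left as an exercise):

    import csv
    import time
    import RPi.GPIO as GPIO

    # Pin -> lane label; the pin numbers and labels are placeholders.
    LANES = {17: "lane 1", 27: "lane 2", 22: "lane 3"}

    GPIO.setmode(GPIO.BCM)
    for pin in LANES:
        GPIO.setup(pin, GPIO.IN)

    def record(pin):
        """Append one timestamped selection to the log."""
        with open("selections.csv", "a", newline="") as f:
            csv.writer(f).writerow([time.strftime("%Y-%m-%d %H:%M:%S"), LANES[pin]])

    for pin in LANES:
        # bouncetime stops one pedestrian registering as several triggers
        GPIO.add_event_detect(pin, GPIO.RISING, callback=record, bouncetime=2000)

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        GPIO.cleanup()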

The nice thing about this setup is that once it’s deployed it’s fairly easy to change the signs and ask different questions. I am feeling … focussed/frisky/meh. AFL premiers 2012 … Hawks/Swans/Who cares?

Not only would it provide useful data to city planners, I suspect it would be popular with pedestrians. I mean, how nice would it be to have an interactive city that cares about your opinions?


Survey of head-up display technologies

As cool as my baseball cap headset looks, it is totally impractical in the real world. Not surprising, since it was built using off-the-shelf components.

But I was wondering: if I were working with people who actually knew what they were doing, what would be possible? In particular, what are the options for overlaying distant-focus images on a person’s field of view? A Fresnel lens plus half-mirrored reflector works, but the image quality isn’t great and it’s kind of bulky, so there must be better options.

[Image source: Laster Technologies]

The method used by Laster Technologies of France looks sensible. The focusing and reflection are handled by a single half-mirrored curved reflecting surface, presumably a parabolic segment. It’s cheap and easy to implement, and there’s not much that can go wrong. I’m not sure if it’s compact enough to be used in glasses, but it could certainly be used in a hat- or helmet-mounted configuration.

I wonder how clearly the contents of the display can be seen by others, given that the image is being projected forwards and not entirely reflected. With a baseball cap you could probably use the bill to block line-of-sight to the OLED, but it could be a problem with glasses.
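
(A quick sanity check on my parabolic-segment guess: a parabola z = r^2 / (4f) has its focus on the axis at height f, and every ray leaving the focus reflects off the surface parallel to the axis. So a display sitting at the focus produces a collimated reflection, an image focused at infinity, which is exactly the distant-focus behaviour a head-up display needs.)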

[Image source: SBG Labs]

The DigiLens by SBG Labs uses some pretty amazing technology. You really need to watch their video to see what’s going on, but here’s the gist of it: essentially, you can duplicate any lens arrangement using a hologram (which is far more compact than the lenses would be). Unfortunately, holograms only work with monochromatic light, so full-colour images haven’t been possible. The SBG solution is to use switchable holograms (which I didn’t know were possible). The red, green, and blue components of the image are cycled in rapid succession and relayed to the reflective element. The reflective element consists of a sandwich of three switchable holograms which cycle between reflective and transparent in sync with the images. If you cycle this fast enough, apparently your eyes see it as a full-colour image (at 60 full-colour frames per second, that means switching at 180 Hz). I’m guessing you could use the same technology to generate full-colour 3D holograms, but that’s another topic.

All very impressive, but I have some concerns. First, it sounds expensive. Second, power consumption is going to be higher than Laster’s solution because the reflective element is constantly switching. Third, rapidly-switched RGB is supposed to look like full colour, but I’ll believe it when I see it. And finally, I wonder what sort of image quality you get when you use monochromatic RGB. I know that laser light looks weird, but that may be due to it being coherent rather than monochromatic.

[Image source: Vuzix]

The Vuzix STAR 1200 uses “patented quantum optic see-thru technology”, which means nothing to me. Does anyone know what technique they actually use?

[Image source: Lumus]

Same problem with Lumus. They use a “patented LOE (Light-guide Optical Element) technology”, which might sound good from a marketing perspective, but it tells you nothing about how they actually work. Because it “shatters the perceived laws of conventional optics” I suspect they may be using SBG’s technology or a variant of it, but since I can’t be bothered doing a patent search, I don’t know for sure.

Are there any technologies I’ve missed?


Improved optics

[Image: the revised headset. Note the new visor.]

Two of the problems with the original baseball cap head-up display were the dimness of the image and the difficulty of aligning the reflective screens. They turned out to be fairly easy to fix.

First, the two reflective screens, one for each eye, were replaced with a single screen. Going from two surfaces, each with two degrees of freedom, to a single surface with one degree of freedom makes it much easier to line up the image.

The original motivation for using two screens was to have the option of varying parallax to control the image distance. Nice in theory, but too hard to use in practice. I also think two screens looks cooler, but given how lame the whole setup looks, I don’t think it matters.

The dimness problem was solved by using a more reflective material. The iPhone case protector film was replaced with 15% VLT mirror-tint film, the stuff you put on windows. 15% VLT means 15 percent of visible light is transmitted, and presumably the remaining 85 percent is reflected. Melbourne has been overcast the past few days, so I haven’t had a chance to test it in strong light, but indoors it works really well.
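
As a rough sanity check on the numbers: ordinary clear film reflects only about 4 percent of incident light at each surface, so the old screen protector was bouncing perhaps 8 percent of the phone’s light toward my eyes. If the mirror tint reflects most of the 85 percent it blocks, the virtual image should be something like ten times brighter, which is consistent with what I’m seeing indoors.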

Now that the screens are lined up properly and I can clearly see the image, I’ve noticed another thing: more by luck than anything else, when the phone is displaying a live video feed, it lines up exactly with the real world. In a dark-ish room where the video is the main source of light, it’s good enough to navigate by. And it means augmented reality applications such as Wikitude will display points of interest in the correct location. Nice.

One final problem with the original design was the weight of the phone on the end of the cap. I have an HTC Desire Z with a slide-out keyboard, and it’s a fairly hefty device. I speculated that a lighter phone such as an iPhone or a keyboard-less Android might be more comfortable. Well, it turns out they are. A lot more comfortable.

They also avoid another Desire Z problem – when attached to the cap, the volume control buttons of the Desire Z rest on the bulldog clip. That causes the “volume down” button to remain depressed, putting the phone into vibrate mode – and making the phone buzz continually to let me know. The other phones I tested are either too light to depress the buttons, or the buttons are located somewhere else. So now I’m tempted to buy a cheap second-hand Android as a dedicated augmented reality device.