Wednesday, November 25, 2015

The Infinite Reality Show

I tend not to spend too much time talking about the long-term intentions of the work I'm trying to do with Astromech. Partly because I'm working it out as I go, and mostly because I've heard no end of "It's gonna be great!" exhortations in my life that turned out to be vaporware and I don't want to be That Guy.

I wanted Astromech to get to the point where it could provably do the essentials, before talking about the possibilities. In the last few weeks, those essentials have come together.

Data acquisition from commodity image sensors, check. Real-time DSP of the data using Fourier transforms, check. FITS compression of the data using Haar transforms, finished a few days ago. Distributed comms working. Video comms working. Thousands of lines of '3D visualization' code.

So, what's the point? It's all about mapping reality. Let's take this in stages:

Observation

If you want to see the universe, you need to point a telescope at the sky. There's a great deal of optical and mechanical engineering involved, but you can shortcut that and buy a surprisingly good 6-inch Maksutov-Cassegrain off ebay for a few hundred bucks.

Most of science is about taking pictures of fuzzy blobs.
Here's my first image of Saturn.

For most people, this is where the hobby ends. Every now and again the dust gets blown off, they observe Saturn, get their Wows, and no actual science is really achieved.

A smaller cadre of 'serious' amateur astronomers are out every night they can get, some with automated telescopes of surprising power and resolution. Some treat it like a professional photography shoot with fewer catwalk models and more heavenly bodies, and make quite a good income. But the vast majority of that data just sits on hard drives, not doing science either.


Acquisition

For science to happen, you have to write all the numbers down. Eyes are terrible scientific instruments, but it turns out the JPEG and h264 compression algorithms are equally bad, literally smoothing away the most important data points.

It's why professional photographers currently make the best amateur astronomers, because they have access to acquisition devices (eg $4k Nikon cameras) which don't apply this consumer-grade degradation.

When you look into the details, what stands out is that the same hardware is often involved, it's the signal processing chain that's different.

Here is where we start having to consider our 'capabilities', in terms of how much CPU, memory, and bandwidth you have. If you point a high-resolution camera at the sky and just start recording raw, you will very quickly overwhelm your storage capacity, even if you have a RAID of terabyte drives.

Trust me on this. Been there, still haven't got the disk space back.

And the sad thing is, if like me you used "commodity" image capture hardware then the data is scientifically useless. Just pretty pictures.

Video imagery of the moon, taken through my telescope in '13.
You need 'raw' access to the pixel data, which is coming in a torrent. A flood. 4000x4000 images, one every few seconds, if you're using a DSLR camera and lucky imaging. Some people who look for meteors use 1024x720 video streams at high framerates. When you see a professional observatory, you're looking at a cool digital camera all right, but one that's literally sitting on its own building-full of hard drives. That's a big memory card.
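To put rough numbers on that torrent, here's a back-of-the-envelope sketch. The frame sizes come from the paragraph above; the 16-bit sample depth, the three-second cadence and the 60 fps meteor stream are my assumptions, so treat the totals as order-of-magnitude only.

```javascript
// Back-of-the-envelope raw data rates. Assumptions: 16-bit monochrome samples,
// one 4000x4000 DSLR frame every 3 seconds, and a 1024x720 meteor stream at 60 fps.
const BYTES_PER_SAMPLE = 2;

const dslrFrame    = 4000 * 4000 * BYTES_PER_SAMPLE;   // 32 MB per frame
const dslrPerDay   = dslrFrame * (86400 / 3);          // one frame every 3 s, all day

const videoFrame   = 1024 * 720 * BYTES_PER_SAMPLE;    // ~1.5 MB per frame
const videoPerHour = videoFrame * 60 * 3600;           // 60 fps for an hour

console.log((dslrPerDay / 1e9).toFixed(0) + ' GB/day of raw DSLR frames');     // ~922
console.log((videoPerHour / 1e9).toFixed(0) + ' GB/hour of raw meteor video'); // ~318
```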


Signal Processing

If you want to turn that raw video into useful data, you have to bring some fearsome digital signal processing to bear. Just to clean up the noise. Just to run 'quality checks'. Then there's the mission-specific code (the meteor or comet detector algorithms, if that's what you're doing) and the compression you'll need to turn the torrent into a manageable flow you can actually keep.

Not just to store it to your hard drive. But also to "stream-cast" it to other observers. Video conferencing for telescopes.
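As a sketch of what that chain looks like per frame (heavily simplified, with made-up thresholds; nothing here is Astromech's actual code):

```javascript
// Hypothetical per-frame chain, simplified to the point that it runs on fake data.
// A real pipeline would use calibrated dark frames, proper detection and FITS output.
function processFrame(pixels, dark) {
  // 1. Noise cleanup: subtract a dark frame.
  const cleaned = pixels.map((v, i) => v - dark[i]);

  // 2. Quality check: reject frames whose mean level is suspiciously high (clouds, dawn).
  const mean = cleaned.reduce((a, b) => a + b, 0) / cleaned.length;
  if (mean > 200) return null;

  // 3. Mission-specific detection: flag pixels far above the background (candidate events).
  const events = [];
  cleaned.forEach((v, i) => { if (v > mean + 50) events.push(i); });

  // 4. Hand the cleaned frame and any events on to compression / stream-casting (not shown).
  return { cleaned, events };
}

// Fake 4x4 "frame" just to show it runs.
const frame = Float32Array.from({ length: 16 }, () => Math.random() * 100);
const dark  = new Float32Array(16).fill(5);
console.log(processFrame(frame, dark));
```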

Why? Because when strange things happen in the sky, the first thing astronomers do is call each other up and ask "Are you seeing this?". Some of those events are over with in seconds, and some of the greatest mysteries in astronomy persist because, basically, we can't get our asses into gear to respond fast enough.

Have we learned nothing from Twitter?

We can't wait for the data to get schlepped back home, and processed a week later. We need automated telescopes that can get excited, and call in help, while we're over the far side of the paddock having tea with the farmer's daughter.


Ground Truth

Up until now we've just been talking about slight improvements to the usual observer tasks. Stuff that's done already. Making the tools of professional astronomers more available to amateurs is nice (and as we've seen, that's really all in the DSP), but what's the point?

Here we could diverge into talking about an algorithm called "Ray Bundle Adjustment", or even "Wave Phase Optics" but Ray's a complicated guy, so I'll sum it up:

If you want to reconstruct something in 3D, you need to take pictures of it from multiple angles. You probably guessed that already. There's a big chunk of math for how you combine all the images together, and reconstruct the original camera positions and errors. Those are the "bundles" that are "adjusted" until everything makes sense.

The more independent views you have on something, the better. Even for 2D imagery. Even aberrations in the sensors become useful, so long as they're consistent. It can create 'super-resolution'.
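If you want to see the quantity that actually gets "adjusted", it's the reprojection error: the gap between where a camera model says a 3D point should land on its sensor and where it was actually observed. Here's a minimal sketch with idealized pinhole cameras and no lens distortion; a real solver tweaks the camera poses and the 3D points together until this number stops shrinking.

```javascript
// Total reprojection error for a set of cameras and 3D points - the thing bundle
// adjustment minimizes. Idealized pinhole cameras, rotation given as a 3x3 matrix.
function project(point, cam) {
  // World point into camera coordinates: X_c = R * (X - C)
  const d = [point[0] - cam.center[0], point[1] - cam.center[1], point[2] - cam.center[2]];
  const x = cam.R[0][0] * d[0] + cam.R[0][1] * d[1] + cam.R[0][2] * d[2];
  const y = cam.R[1][0] * d[0] + cam.R[1][1] * d[1] + cam.R[1][2] * d[2];
  const z = cam.R[2][0] * d[0] + cam.R[2][1] * d[1] + cam.R[2][2] * d[2];
  return [cam.focal * x / z, cam.focal * y / z];          // pixel coordinates
}

function totalReprojectionError(points, cameras, observations) {
  // observations: [{ cameraIndex, pointIndex, u, v }, ...]
  let sum = 0;
  for (const ob of observations) {
    const [u, v] = project(points[ob.pointIndex], cameras[ob.cameraIndex]);
    sum += (u - ob.u) ** 2 + (v - ob.v) ** 2;
  }
  return sum;   // the solver adjusts cameras *and* points to drive this toward zero
}
```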

Beyond that, there's "Light Field Cameras", which use a more thorough understanding of the nature of light to produce better images - specifically that traditional image sensors only record half the relevant information from the incoming photons.

Most camera sensors record, for each square 'pixel' of light, how much light fell on the sensor (intensity) and its colour (wavelength). What you don't get is the direction of the incident photon (it's just assumed), or its phase.
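To make that concrete, here's the difference written out as a toy data structure. The field names and values are purely illustrative; the point is just how much of the incoming wavefront a conventional pixel throws away.

```javascript
// What a conventional sensor keeps per pixel, versus what a full description
// of the incoming light would need. Values and field names are illustrative only.
const conventionalPixel = {
  x: 1024, y: 768,        // which photosite
  intensity: 3170,        // how much light fell on it
  wavelength: 550         // 'colour' - really a filtered band, e.g. the green Bayer channel
};

const lightFieldSample = {
  x: 1024, y: 768,
  intensity: 3170,
  wavelength: 550,
  direction: [0.01, -0.02, 0.9997],  // incident ray direction - normally just assumed
  phase: 1.2,                        // radians - lost entirely by ordinary sensors
  polarization: 0.3                  // the bit the bees can see and we can't
};

console.log('extra quantities a light-field view cares about:',
  Object.keys(lightFieldSample).length - Object.keys(conventionalPixel).length);
```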

For a very long time we thought those other components weren't important, mostly because the human eye can't resolve that information. Insects can perceive these qualities, though. Bees can see polarization, and compound eyes are naturally good at encoding photon direction. We couldn't, so we didn't build our telescopes or optical theory with that in mind.

Plus, the math is hard. You have to do the equivalent of 4D partial Fourier transforms. Who wants that?

But when you work through it, you realize that you can consider every telescope pointed at the sky to be one element of a planet-wide wave-phase optics "compound eye" with the existing hardware. (and maybe a polarization filter or two)

All we need to do (ha!) is connect together the computers of everyone currently pointing a telescope at the sky, and run a global wave-phase computation on all that data, in real-time. (I might be glossing over a few minor critical details - learn enough to prove me wrong.)

This is not beyond the capability of our machines. Not anymore. The hardware is there. The software isn't. This is what I've been working on with Astromech. A social data acquisition system that assumes you're not doing this alone.

What you get out of this is "Ground Truth", a term that mostly comes from the land-surveyors who are used to pointing fairly short-ranged flying cameras at a very nearby planet. But it's the same problem.

This is the stage we can finally say we're "Mapping." Once we got enough good photos of the asteroid Ida, we constructed a topographical map. Once we got enough information on its orbital mechanics, we could predict where it would be.

Fundamentally, that allows us to prove our mastery of the maps by asking questions like "If I point a telescope at Ida right now, using these co-ordinates, what would I see?"

ie: Can I see their house from here?


Simulation

To really answer that question means you have a 3D engine capable of rendering the object using all known information. If we assume Ida hasn't changed much in terms of surface features, then it's pretty easy to "redraw" the asteroid at the position and orientation the orbital mechanics says it should have.

Then you just apply all the usual lighting equations, and you'll have a damn passable-looking asteroid on your screen.

But it's not 'real' anymore. Not exactly. It's not an image that anyone has taken in reality. It's a simulation. A computer-enhanced hallucination. A flight of the imagination.

Good simulations encode all the physically relevant parameters, and the main point of them is to provide a rigorous test of the phrase "That's funny..." ("How funny is it, exactly?")

Because by now humans are pretty good at predicting the way rocks tumble. It's kind of our thing. When rocks suddenly act in a way other than predicted (than simulated) it indicates that we've got something wrong. Or something interesting is going on.

And being wrong, or finding something interesting; that's Science!

Simulations are also the only way that most of us are ever going to "travel" to these places. Thankfully our brains are wired such that we can hang up our suspenders of disbelief long enough to forget where we are. Imagination plays tricks on us. There are people right now (in VR headsets viewing Curiosity data) who've probably forgotten they're not on Mars.

Used the right way, that's a gift beyond measure.


Sentinels

Among the most interesting events we can see are stars blinking on and off when they shouldn't be.

Yes, this happens. A lot. Sometimes stars just explode.

Then there's all kinds of 'dimming' events that have little to do with the star itself, just something else passing in front of it. We tend to find exoplanets via transits, for example. Black holes in free space create 'gravitational lenses' that distort the stars behind them like a funhouse mirror, and we might like to know where they are, exactly.

Let's say we wanted to watch all the stars, in case they do something weird. That's a big job. How big?

Hell, if we just assign the stars in our galaxy and get every single person on the planet trained as an astronomer, then each person has to keep vigil over about 20 stars (assuming they could see them at some wavelength). If we're assigning galaxies too, then everyone gets 10,000 of those.
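The star figure is just one division, with the usual astronomical error bars (the Milky Way's star count is an order-of-magnitude estimate; I've taken 150 billion as a middle-of-the-road value):

```javascript
// Rough division behind the "about 20 stars each" figure. Both inputs are estimates.
const starsInGalaxy = 1.5e11;   // Milky Way estimates run from ~100 to ~400 billion
const people        = 7.3e9;    // world population, late 2015
console.log(Math.round(starsInGalaxy / people), 'stars per person');   // ~21
```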

Please consider that a moment. If every human were assigned their share of known galaxies, you'd have 10,000 galaxies to watch over. How many do you think you'd get done in a night? How long 'til you checked the same one twice and noticed any upset?

We're gonna need some help on this.

And really, there's only one answer: create little computer programs based on all our data and simulations, task 200 billion threads with watching over the stars for us, and have them send a tweet when something funny happens.

We can't even keep up with Netflix, so how the hell are we going to keep up with the constantly-running tera-pixel sky-show that is our universe?

I've got a background in AI, but I'll skip the mechanics and go straight to the poetics: we will create a trillion digital dreamers - little AIs that live on starlight, on the information it brings, who are most happy when they can see their allocated dot, and spend all their time imagining what it should look like, and comparing that against the reality. Some dreamers expect the mundane, others look for the fantastic, and bit by bit, this ocean of correlated dreamers will create our great map of the universe.
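Stripped of the poetry, each dreamer is an anomaly detector: predict, compare, and shout when the difference is bigger than the noise can explain. A toy sketch, with everything simulated so it stands on its own:

```javascript
// One "dreamer": it knows what its dot should look like, watches what comes in,
// and raises a flag when the deviation can't be explained by noise.
function makeDreamer(expectedBrightness, noiseSigma) {
  return function watch(observedBrightness) {
    const deviation = Math.abs(observedBrightness - expectedBrightness);
    if (deviation > 5 * noiseSigma) {
      return `That's funny... off by ${deviation.toFixed(1)} (a 5-sigma surprise)`;
    }
    return null;   // the mundane case: the sky looked exactly as imagined
  };
}

// Pretend observations: a steady star that suddenly brightens.
const dreamer = makeDreamer(100, 2);
for (const obs of [99, 101, 100, 160]) {
  const alert = dreamer(obs);
  if (alert) console.log(alert);
}
```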

Every asteroid. Every comet. Every errant speck of light. Every solar prominence or close approach. We are on the verge of creating this map, and the sentinels who will watch over the stars for us, to keep it accurate.

There's not a lot of choice.



Tuesday, November 17, 2015

Harr Harr!

Here's today's development screenshot of Astromech: (from the virtual DSP desk.)


An image that will give your PNG decompressor conniptions, no doubt. The middle screen-full of leafy trees is a live webcam feed from out my window. The pink lines all across it are because it's a shitty webcam that cost $6 off ebay.

The left-hand screen is a 256x256 real-time Fast-Fourier Transform of the webcam luminance. That's not big news, Astromech has always done that. Its first trick.

The right-hand screen is the new thing for today. It's a 512x512 "H-Transform"; the H stands for Haar, so it's really a two-dimensional Haar transform. I also call it the "Hubble Transform", because it's the basis of the compression format the Hubble data team invented in order to distribute 600Gb of their pretty pictures.

The full text I'm following here is the Tiled Image Convention for Storing Compressed Images in FITS Binary Tables, published by NASA.

Don't let that NASA appellation fool you into thinking there's anything hard about the H-Transform. Compared to the FFT or Cosine Transform or Huffman coding it's very, very simple. And the best thing about the H-Transform is that it's parallelizable on WebGL. (as you can see.)

That's how I'm doing this in real time (about 12 frames per second at a guess, limited by webcam speed) in my browser, with CPU usage around 20%.

Why the H-Transform? Why not just use something browser-supplied like h264, or VP8, or a stream of JPEG/PNG images (MJPEG!), all of which are built into most modern browsers? Well, in a nutshell, because nice as they are, they're not "scientific".

There's a really big difference between a compressor that optimizes for the human perceptual system, and a compressor that tries to preserve the scientific integrity of the source data. The H-Transform is the second type.

Similar to a 'MIPMap', the H-Transform encodes a pyramid of lower-resolution (but higher entropy) versions of the source image into the lower-left corner, like a fractal. The larger 'residual' areas become easier to compress.


That's why NASA trusts data that has been stored in that format. It has certain very nice mathematical properties. It's a 'lossless' compressor, but one with a tuneable 'noise floor'. If that seems a contradiction, welcome to the magic of the quantized H-transform, where 60:1 compression ratios are possible.
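To show just how simple, here's the whole per-block forward step, plus the quantization that gives you the noise floor. This is a sketch from my reading of the hcompress / FITS convention; normalization details vary between implementations.

```javascript
// One level of the 2x2 H-transform: four pixels become one smoothed value plus
// three differences. (Treat the /2 normalization as illustrative - conventions vary.)
function hTransformBlock(a00, a01, a10, a11) {
  return {
    h0: (a11 + a10 + a01 + a00) / 2,   // low-resolution "pyramid" value
    hx: (a11 + a10 - a01 - a00) / 2,   // difference in one direction
    hy: (a11 - a10 + a01 - a00) / 2,   // difference in the other
    hc: (a11 - a10 - a01 + a00) / 2    // diagonal (cross) difference
  };
}

// The "tuneable noise floor": quantize the difference coefficients before entropy
// coding. A scale of 1 keeps it lossless; larger scales throw away only variation
// that sits below the noise, which is where the 60:1-style ratios come from.
function quantize(coefficient, scale) {
  return scale > 1 ? Math.round(coefficient / scale) : coefficient;
}

const block = hTransformBlock(10, 12, 11, 13);
console.log(block);                    // { h0: 23, hx: 1, hy: 2, hc: 0 }
console.log(quantize(block.hx, 4));    // 0 - small differences vanish into the floor
```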

There's a couple of stages to go before that image on the right is turned into a FITS file, but the hard part is done and the rest is just shuffling bits around. Well, assuming the browser will let me save a stream of data to a file. That's really tricky, it seems.

Update: 22/Nov

The whole thing provably works now, since I've also implemented the inverse H-transform. (there were a few bugs)

The inverse of the transformed cat is also a cat. Well, you'd expect that, surely?
Basically, in this example the middle webcam image is encoded into the right-hand image (which looks fractal yet empty - that's the H-transform), and then that is run through the inverse transform (a separate bit of code that does everything in a different order, using the big mostly-empty texture) to end up in the left-hand window.
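For completeness, the inverse of a single block is the same four-point butterfly run the other way (same illustrative normalization as the sketch above); with no quantization the round trip is exact.

```javascript
// Undo one 2x2 block of the H-transform. With no quantization this reconstructs
// the original pixels exactly, which is why the round-tripped cat is still a cat.
function inverseHTransformBlock({ h0, hx, hy, hc }) {
  return {
    a11: (h0 + hx + hy + hc) / 2,
    a10: (h0 + hx - hy - hc) / 2,
    a01: (h0 - hx + hy - hc) / 2,
    a00: (h0 - hx - hy + hc) / 2
  };
}

// Coefficients from the forward sketch above: back to the original 10, 12, 11, 13.
console.log(inverseHTransformBlock({ h0: 23, hx: 1, hy: 2, hc: 0 }));
```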

It's almost too easy.

And so the fact that the left and middle images look boringly identical is a good thing, given the (poor quality to begin with) data has been mangled twice in between by me. Cat sitting on a warm computer staring at the cursor: it's a common test case around here.

Friday, November 13, 2015

Fire Map 2015

First of all, sorry to everyone who uses my Fire Map. I hadn't noticed that the NSW dots were no longer showing up. I just fixed that. Apologies.

Since I'm no longer connected with departments that produce internal reports on the coming fire season, I really don't have any idea what the forecast for this Christmas is. However, I've just noticed the fire-line across the Northern Territory:



That's a lot of red dots, big enough to be visible from space. Those are not good dots.

And from what I remember with my flawed brain, that staggered line is a trend-setter that continues south in waves over the next few months, as those heat conditions spread.

So, it's probably a good time to consider the history of the Fire Map.

The Past
When I started the map, each service ran their own state incident map (and still do) and I knew that there was no federal agency that had the remit to make a country-wide map. Google had a fairly good relationship with NSWRFS and had just added Victoria, but had no plans to add Queensland or beyond. There seemed only one thing to do...

The Present
Frankly, I've been neglecting the Fire Map for Astromech and day jobs. It just kind of sits there, working away, and really only needs occasional checks and updates when one of the services changes their servers in some way. Fortunately, you can always depend on the inertia of government departments, and I've gone for years without too much issue.

The Future
I'm not sure. I just noticed that the Northern Territory finally has their own incident map / feed, and the perfectionist in me always felt annoyed that there was one big empty chunk on the map. I may have to fix that, at which point the tapestry will be complete.

But long term, the map has no future. I nearly shut it down last year when the hotspots went away for a while, meaning there was no effective difference between my map and the Google Crisis Response map, which has come along enormously in the last few years. I was a momentary expert three years ago, but I really haven't kept up. 

I shouldn't be doing this. The only reason I do is because there doesn't seem to be anyone else with the same focus, and the needed abilities.

I'd love to hand it on to some official organization that can form relationships with the state agencies that source the data, and thereby have more warning of server changes than "crap, the dots aren't working today." But that doesn't seem likely either. I once had grand ideas about doing bushfire prediction, (which I still think is very possible) but that was when I had access to data, and the people for whom those answers were relevant.

I'd be interested in advice for what I should do in the future. Is the map still needed?


Update: NTFRS data online

It would have to be Humpty Doo.
Adding the Northern Territory to the fire map took almost 1/3 of a packet of Anzac biscuits to accomplish (and two years of waiting). That means the map is complete. Every state is on there, making it, truly, the "Australian Fire Map" at last.

My work here is done.


Why Video Games were Not Art

I always thought that computer games were a form of Art. Today, I learned why I was wrong.

First, let's start with a definition of "Art" that is mostly "That which is shown in museums and galleries and whatnot." Statues are shown in museums and galleries, therefore statues are art. Paintings, sculpture, movies, songs. Note these are all forms, rather than specific things.

So, why aren't computer games Art? Because, in a nutshell, Museums couldn't exhibit them, without fear of prosecution from the rabid armies of copyright lawyers engaged by the industry to protect their products.

That's it. QED.

It didn't matter how many soulful designers cried that they were doing more than just extracting quarters by addicting teenagers to blinky lights; they were being shivved in the back by their own legal departments, who enjoyed wielding the power of the precious DMCA.

If video games were Art, or were recognized to have cultural significance, then they would have status beyond that of mere "product" and society at large would have a different relationship, with more rights of re-use. Can't have that.

I didn't realize a lot of the internals before I read the excellent article:
Understanding this year's biggest video game copyright ruling at Gamasutra.

The good news: since the US Copyright Office (ugh, from the "why is this even a thing" category) has now said that museums can show off old games without fear, there might one day be an exhibit of 'classic games' at your local museum, as perhaps should have been possible all along?!

Sometimes you only notice how bad the stupid got when it takes a step back. And you think "good start, and another please?"

Sunday, November 8, 2015

Astromech - The Road to Beta2

Going quiet means I've been getting things done on Astromech. I have a set of specific features I want in the next beta, and in the last week nearly all of them have reached a semi-stable point.

Probably the best single demo of this is my little homage to the Caffeine molecule:





But first, the setbacks. The big one is; I've had to seriously question my use of Google Drive.

Anyone who saw the original video noticed how heavily I relied on Google Drive for the task of storing the bulk assets in each 'level' as well as building a collaborative editor (using the Docs realtime API) to set up the scene / load script.

Here's the stage I got to with that, before some rather bad news broke:

A mobile-friendly way to edit assets and DSP 'circuits', backed by the Google real-time collaborative API.
Shame it will probably never see the light of day.
And that's just part of it. There's also the Panel designer - a hierarchical 'box layout' editor for all those cool LCARS-like consoles that litter the Astromech levels.

But unfortunately, Google has announced that they will disable file hosting from Google Drive shortly. I ranted a little about that in my previous post.

That has two very specific impacts on the Astromech 'GUI' editor. It means the files that it creates can't be read anonymously anymore, so any Astromech levels based on a script stored in Google Docs will not be accessible to everyone. That's bad enough on its own to kill that part of the project dead.

What's the point of collaboratively editing a "public world" file if the world's public can't read it?

And where do you put all the resource files it references? On another service? Then what's the point of using all the Google Docs APIs if the 'real' data is elsewhere?

Shoving everything into one directory made things nice and manageable, using relative links. Once you've got your resources scattered across half the internet using absolute URIs, it makes things so much harder.

It's not just that I'd have to add a "Save As..." button to the Google Drive app, I have to re-think the entire premise of how users collaboratively store and work with terabytes of data. Instead of a central dumb (but reliable) fileserver and peer-to-peer clients, I'll probably need a peer-to-peer _server_ layer as well. ie: I need to replace Google by Christmas.

The old levels still work for now, but the access they depend on is deprecated and goes away next year. But it's the wasted effort in that direction that really hurts. Hopefully I can salvage most of the UI and editors, while backing them with a different datastore.

Then, there was the whole getUserMedia http:// deprecation thing I had to deal with. Within months, "powerful browser features" (which is basically everything I use) will not work from http:// servers. Only https://.

This really broke me for a couple of days (I even got into a discussion with the security@chromium.org list) because it implied that running Astromech would only be possible from the Internet by paying money to Verisign-derivatives. Not, for example, on your own goddamn computer.

I settled down a bit when it was pointed out that localhost is supposed to be considered "secure" (even if it only uses http:// without the SSL), so you will still be able to download and run Astromech on your own machine and use all the features. You can imagine how maddening the alternative would have been.

However, this still leaves the localnet in limbo. It's no longer clear how you'd run the software on your own desktop and access it, for example, from your own iPad over your own WiFi network. (Why, it's just as simple as creating your own SSL certificate authority for signing local machines, and then installing the certificates on each device, of course... why are you complaining?)

It was always in the back of my mind that a piece of Astronomy software that only really worked indoors while connected to high-speed internet might not be as useful as it could be. (instead of, say, in a dark paddock filled with amateur astronomers bristling with advanced imaging equipment and local bandwidth but poor global internet connectivity.)

I really don't want Astromech to be a "local webserver" install, for every individual user/machine. It should be more like running a minecraft server. If you need to install local servers everywhere to get a browser app to work, then what's the point of doing it in a browser? Why not just write a full application?

And besides, it seems really counter-intuitive that the only way to work with/around the 'increased browser security' is to start installing local code (eg, a node.js micro-server) with full binary access to the machine. That's just security whack-a-mole. If the machine gets boned through an exploit in my code, then it's not their fault for leading me down that path, obviously.

But the browser makers are determined to deprecate http:// and that's that. It doesn't matter that https:// is flawed, costly, inefficient, and creates barriers to entry.

OK, so now the good stuff that's made it into Astromech in the last few weeks:


iOS + Edge support
Astromech now 'works' on iOS9, to the extent it will load and render the scene using WebGL. What it doesn't do very well (or at all) is replace the keyboard/mouse control scheme with something that functions equivalently using touch.  I'm probably going the little "thumbsticks in the screen corners" route there, as soon as I get the time.

Microsoft Edge is running Astromech fairly well, better than IE did, but it also has some feature gaps (like getUserMedia / WebRTC) that effectively disable some of the more advanced features.

Chrome and Mozilla are still the 'preferred' browsers, but all roads are slowly leading to HTML5 compatibility across all devices.


Improved Blender/WebGL shaders
The first-gen model loader did well with geometry, but badly with surfaces. For a start, only the first texture worked, and there was no real lighting model. So, scenes looked very different in Astromech from how they originally looked in Blender, even if the geometry was correct.

The 'shader compiler' I wrote has been extended with a full multi-source specular lighting model, with 'sun' and 'point' lights. Technically it does a Lambert/Blinn-Phong pass with fixed lights.

So, now you can export a fairly generic existing Blender model (instead of carefully building one specifically for Astromech) and it will mostly work as you expected. Common surface materials work. Multiple scene textures work. Bump maps sort of work (they have the common view-independence problem because the Collada export lacks tangent vectors, so the bumps always point 'up' instead of 'out', though there's probably a way to solve that. Good for floors, though.)
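For anyone wondering what a 'Lambert/Blinn-Phong pass with fixed lights' boils down to, here's the per-light term in plain JavaScript rather than GLSL. This is the generic textbook form, not Astromech's actual shader compiler output, and the material constants are made-up example values.

```javascript
// Per-light Lambert diffuse + Blinn-Phong specular. normal, lightDir and viewDir
// are 3-vectors pointing out of the surface, toward the light, and toward the eye.
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const normalize = v => {
  const m = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / m, v[1] / m, v[2] / m];
};

function shade(normal, lightDir, viewDir, material) {
  const n = normalize(normal), l = normalize(lightDir), v = normalize(viewDir);
  const h = normalize([l[0] + v[0], l[1] + v[1], l[2] + v[2]]);            // half vector

  const diffuse  = Math.max(dot(n, l), 0);                                 // Lambert
  const specular = Math.pow(Math.max(dot(n, h), 0), material.shininess);   // Blinn-Phong

  return material.ambient + material.kd * diffuse + material.ks * specular;
}

console.log(shade([0, 0, 1], [0, 0.5, 1], [0, 0, 1],
                  { ambient: 0.1, kd: 0.7, ks: 0.3, shininess: 32 }));     // ~0.85
```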

I still don't have a great solution for transparent surfaces, but then, neither does anyone else.


Multiple Scene Models
Version 1 only properly loaded a single 3D model as the 'primary scene'. That's been fixed, so you can load an arbitrary number of collada files, and instance them multiple times within the scene at multiple locations.

eg: In the "Atomic Caffeine" demo, each of the four atoms was modelled/coloured in Blender, and then instanced into the scene as many times as the entire molecule needed.

New features sometimes magnify minor old problems; in this case the lack of a global lighting model. Since each 'scene' model carries its own lights in its own reference frame, obvious visual inconsistencies occur when you put several models together and rotate some of them. (Although, less jarring than I'd have thought.)

Fully dynamic lighting is a major overhead with diminishing returns. So, I'll probably go for a compromise, with only a few global dynamic lights.


Cannon.js Physics Engine
The other side of the multiple-model system is the ability to define a 'physics proxy' (usually a box or sphere with properties of mass and friction) to which the position of the 3D models is attached.

I've chosen the cannon.js physics system to do the heavy lifting. It can connect the proxy objects together with 'hinges', 'springs', and other physical constraints like gravity, and then model the physics over time and update the objects.

It's extremely efficient (the solver it uses is very advanced) although there are severe practical limits to just how much you can do in real-time. But a little physics is a great way to add some life to an otherwise static scene, and give the user the sense that they're there, and bumping into things.
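For the curious, the proxy pattern with cannon.js looks roughly like this. It's a sketch of the library's standard usage, not Astromech's actual wiring; `mesh` stands in for whatever render-side object carries the 3D model.

```javascript
// A "physics proxy": cannon.js simulates a simple body (mass, shape, friction),
// and each frame the rendered model just copies the proxy's position and rotation.
const world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

const ground = new CANNON.Body({ mass: 0, shape: new CANNON.Plane() });   // mass 0 = static
ground.quaternion.setFromAxisAngle(new CANNON.Vec3(1, 0, 0), -Math.PI / 2);
world.addBody(ground);

const proxy = new CANNON.Body({
  mass: 5,                                   // kilograms
  shape: new CANNON.Sphere(1),               // 1-unit sphere standing in for the model
  position: new CANNON.Vec3(0, 10, 0)
});
world.addBody(proxy);

function stepPhysics(mesh) {
  world.step(1 / 60);                        // advance the simulation one frame
  mesh.position.copy(proxy.position);        // drag the visible model along with the proxy
  mesh.quaternion.copy(proxy.quaternion);
}
```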


Scriptable UI
I've just about finished exposing all the things that Astromech can do as scriptable elements - as opposed to my early examples that used lots of hard-coded JavaScript.

It's slightly less flexible than raw JavaScript - at least until I create a 'module' system capable of safely loading arbitrary code. It's still just a set of pre-approved LEGO blocks you can arrange in various ways, but at least the set of blocks is getting bigger.

Scripts don't all have to run on load. The script can define UI "command buttons" which run parts of the script later... which might load more resources and create new buttons. A common use of command buttons is to provide "teleport" options which can jump you around the map.

In practice, you can already build a 'conversation tree' system which offers choices dependent on previous choices. (All the buttons would be pre-defined, just shown and hidden using commands activated by other buttons.)
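As a concrete (and entirely hypothetical) illustration of the shape of that idea, not Astromech's real script format: two pre-defined buttons, where activating the first reveals the second.

```javascript
// Hypothetical command-button definitions - illustrative structure only.
// Each button runs a list of commands, and commands can show or hide other
// pre-defined buttons, which is all a conversation tree really needs.
const buttons = [
  { id: "greet", label: "Hail the station", visible: true,
    commands: ["say Hello there!", "show request_docking", "hide greet"] },
  { id: "request_docking", label: "Request docking", visible: false,
    commands: ["say Docking bay 94 is yours.", "teleport docking_bay"] }
];
```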


Social 'presence' and messaging.
The chat system has been functioning for a couple of versions now, based on a websocket 'pub-sub' server that I'm running on an OpenShift cartridge. (Thanks, RedHat!) I've gone through a couple of revisions of this system, and it's been stable and reliable for months.

Previously, you'd get a 'chat message' when someone connected to the channel ("hailing frequencies open"), but the networking code had the full list of the other participants the entire time; you just couldn't see them. Now the right-hand side of the screen is one long column of everyone else in the level with you.

This makes everything feel a lot more MMORPG, and future extensions will be things like "friends lists" and private instances that build on this social side, since there are going to be obvious problems if a 'level' gets too popular.


File Transfer & Videoconferencing
The first features the social list made possible were inter-user private chat (easy), followed by file transfer (not so easy, but mostly working), and video conferencing (just got the prototype working).

It's a core idea of Astromech that you should be able to exchange data with other people. This is an essential part of that plan.

The file transfer I'm particularly proud of. To 'transmit', the sender just drags a file out of the File Manager and drops it on the button for the intended recipient. An 'offer' is then sent to the receiver, and turns up in their corresponding list of options for that sender.

If the receiver clicks on this offer button, the file is downloaded to the browser's "Temporary FileSystem" (you get a little progress message while the transfer is in progress), and then the recipient can either click on the button a second time, which opens the (now local) file in the browser, or they can drag the file icon back out of the browser to the filesystem again.

In summary, one user drags the file into their browser. The other user accepts the offer, and then can drag the file back out of their browser. (Well, Chrome) I don't think I can make it simpler.
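The sending half of that is mostly standard HTML5 drag-and-drop plus a FileReader; here's a stripped-down sketch. `recipientButton` and `sendChunk()` are placeholders for the Astromech UI element and for whatever transport carries the bytes (the chat relay today, an RTCDataChannel later).

```javascript
// Sender side: catch a file dropped onto a recipient's button, read it, and hand
// the bytes to the transport in relay-friendly chunks. The receiving side writes
// the chunks into the browser's temporary filesystem, as described above.
recipientButton.addEventListener('dragover', e => e.preventDefault());
recipientButton.addEventListener('drop', e => {
  e.preventDefault();
  const file = e.dataTransfer.files[0];             // the file dragged in from the OS
  const reader = new FileReader();
  reader.onload = () => {
    const bytes = new Uint8Array(reader.result);
    const CHUNK = 16 * 1024;
    for (let offset = 0; offset < bytes.length; offset += CHUNK) {
      sendChunk(file.name, offset, bytes.subarray(offset, offset + CHUNK));
    }
  };
  reader.readAsArrayBuffer(file);
});
```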

Remember, this is meant to be peer-to-peer. Right now all the comms goes through the chat relay (as private messages), but I have the RTC channels working, so I intend to make those the preferred transport to make it truly peer-to-peer, and reserve the relay server as the 'fallback'.


Voice Recognition
This was nearly a 'freebie', in that I went - in one morning - from not knowing that browsers offer a full voice-recognition engine with a JavaScript API to having it working by lunchtime.

Any 'command button' can be given an array of "speech" strings that, if heard by the engine, activates that command button. It's that easy.

It's good to have an optional prefix word that wakes up the engine but can be missed, because that happens a lot. Originally I used "computer" (duh) but soon changed it to "Scottie!" after shouting at my machine for a little while to transport me to new locations and switch on parts of the engine. Feels much more natural, somehow.
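For reference, the engine in question is the Web Speech API (webkitSpeechRecognition in Chrome at the time of writing). A minimal sketch of the wiring described above; the wake word handling and the button objects (`homeButton`, `lightsButton`) are stand-ins for Astromech's own plumbing.

```javascript
// Wire the browser's speech engine to command buttons. 'homeButton' and
// 'lightsButton' are placeholders for real command-button elements.
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;
recognition.interimResults = false;

const WAKE_WORD = 'scottie';
const speechCommands = { 'transport me home': homeButton, 'lights on': lightsButton };

recognition.onresult = event => {
  const heard = event.results[event.results.length - 1][0].transcript.trim().toLowerCase();
  const phrase = heard.startsWith(WAKE_WORD)
    ? heard.slice(WAKE_WORD.length).trim()     // wake word heard: strip it off
    : heard;                                   // wake word missed: accept the phrase anyway
  for (const [speech, button] of Object.entries(speechCommands)) {
    if (phrase.includes(speech)) button.click();
  }
};
recognition.start();
```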



I could go on for pages about all the things I want to do next, the improvement and changes, but I think we'll just stick with what I've actually done so far.

The major features are now mostly in. There's a ton of clean-up work and major bugfixes that need doing before release, but no more super-bleeding-edge experimental features. I did the hard stuff first.

A 'Beta2' release isn't far off now. I'm trying to be quick, before the ground shifts under my feet again. It's not easy doing all this single-handed, but I'll get there.