This fall semester saw improvements to multiple subsystems as we prepare to build the flight units in the spring. One such subsystem is our optical navigation system, about which we have published multiple papers in the past. The most significant improvements this semester were in image processing and recognition. Previously, our system could recognize and distinguish the Sun, Earth, and Moon, but only when these bodies were visible as full circles.
As you can see in the image slideshow below, that is no longer the case. Our algorithm can now recognize partial and even crescent bodies, and can detect and distinguish the Earth and Moon when they overlap. Coupled with the already completed Kalman filters, which estimate attitude and position from this data, the optical navigation algorithm is now complete.
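One classical way to recover a body's disc when only part of its limb is visible is a least-squares circle fit to detected edge points: even a short arc determines the full circle. The sketch below (a standalone illustration with synthetic points, not our flight code) uses the algebraic Kåsa fit:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: returns (cx, cy, radius).

    Solves the linear system 2*cx*x + 2*cy*y + c = x^2 + y^2,
    which works even when the points cover only a partial arc,
    e.g. the sunlit limb of a crescent Moon.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Synthetic limb: a 90-degree arc of a circle centred at (50, 40), radius 20
theta = np.linspace(0.0, np.pi / 2, 50)
arc = np.column_stack([50 + 20 * np.cos(theta), 40 + 20 * np.sin(theta)])
cx, cy, r = fit_circle(arc)
print(round(cx, 2), round(cy, 2), round(r, 2))  # → 50.0 40.0 20.0
```

Real limb points come from an edge detector and are noisy, so the recovered centre and radius carry uncertainty, but the quarter-arc here is enough to pin down the whole disc.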
Several tasks remain for the spring semester. First, we want to improve the hardware interface with the Raspberry Pi cameras. We have already implemented a camera multiplexer for the Raspberry Pi and captured images of the Sun and Moon with these cameras in the field, but the image capture rate and camera calibration can both be improved.
Second, we need to begin testing the combined hardware and software. The Kalman filters have been tested with representative data, and the image processing has been tested with images taken in the field, but the Kalman filters need to be tested using data computed by processing actual images of the Sun, Earth, and Moon. The main obstacle to doing this is finding a stream of images of the Sun, Earth, and Moon from the same spacecraft along a cislunar trajectory. Individual images of any of these bodies are readily available, but we need:
- Images of the Sun, Earth, and Moon from roughly the same location.
- Knowledge of the angular separation between the camera facings when each image was taken.
- Many such sets, taken at different locations in cislunar space.
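The second requirement above is just the angle between two boresight directions, which follows from a dot product. A minimal helper (the vectors here are illustrative, not real camera facings):

```python
import numpy as np

def angular_separation(u, v):
    """Angle in radians between two camera boresight direction vectors."""
    u = np.asarray(u, dtype=float) / np.linalg.norm(u)
    v = np.asarray(v, dtype=float) / np.linalg.norm(v)
    # clip guards against arccos domain errors from rounding
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

# Two hypothetical boresights 90 degrees apart
sep = np.degrees(angular_separation([1, 0, 0], [0, 1, 0]))
print(sep)  # → 90.0
```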
One way to obtain such images is to simulate them. Mission planning software such as STK or GMAT, or planetarium software such as Stellarium, is capable of doing this. Last year, we published a short video showing the spacecraft point of view for part of a simulated Cislunar Explorers trajectory using STK. Simulated images of the Sun, Earth, and Moon could be created in a similar way. This could provide us with arbitrarily many images of the three bodies, and can be easily repeated for different trajectories.
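One quick sanity check on any simulated image is whether each body subtends the right angle for the simulated viewing distance. The geometry is a one-liner; the radius below is the standard mean lunar radius and the distance is the mean Earth-Moon distance, used purely as an example:

```python
import math

MOON_RADIUS_KM = 1737.4        # mean lunar radius
EARTH_MOON_DIST_KM = 384_400.0 # mean Earth-Moon distance, illustrative

def angular_radius_deg(body_radius_km, distance_km):
    """Apparent angular radius of a spherical body, in degrees."""
    return math.degrees(math.asin(body_radius_km / distance_km))

moon_ang = angular_radius_deg(MOON_RADIUS_KM, EARTH_MOON_DIST_KM)
print(round(moon_ang, 3))  # → 0.259 (i.e. ~0.52 deg full disc, as seen from Earth)
```

Comparing the disc radius in pixels against this angle (times the camera's pixels-per-degree) catches scaling errors in the simulation setup before any filter testing begins.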
We will do this next semester, but it only tests the software, because the images are simulated rather than captured with the spacecraft's cameras.
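At the heart of that software test is feeding measured bearing angles into the filters. A scalar Kalman measurement update (a toy example, not our actual attitude and position filters) shows the step being exercised: a noisy measurement pulls the estimate over and shrinks its variance.

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update.

    x, p: prior state estimate and its variance
    z, r: measurement and its noise variance
    Returns the posterior estimate and variance.
    """
    k = p / (p + r)            # Kalman gain
    x_post = x + k * (z - x)   # blend prior and measurement
    p_post = (1.0 - k) * p     # posterior variance always shrinks
    return x_post, p_post

# Prior of 10 with variance 4; measurement of 12 with equal variance
x, p = kalman_update(x=10.0, p=4.0, z=12.0, r=4.0)
print(x, p)  # → 11.0 2.0
```

With equal prior and measurement variances the gain is 0.5, so the posterior lands halfway between the two, which is the behaviour the simulated-image pipeline must reproduce with real bearing measurements.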
We can also repeat field tests of the Sun and Moon to exercise our improved image capture rate and camera calibration, but there is an obvious obstacle to capturing images of the Earth from its own surface, so we cannot collect fully representative data from here on Earth. This approach, then, mainly tests the hardware.
In order to test the hardware and software together, we need a representative environment in which the spacecraft can spin and take images of a fake Sun, Earth, and Moon. Fortunately, we already have a spinning air bearing test rig for our slosh damping measurements. Our existing CubeSat EDU structure can rest on it and spin exactly as the spacecraft will in orbit. We will create a sort of darkroom/planetarium around the model, so the cameras can capture images of the fake bodies and feed them to the navigation algorithm. Other researchers have used similar techniques to test star trackers, projecting starfields on the walls of a darkroom.
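Sizing the fake bodies matters: each should subtend the same angle at the camera as the real body would at some point along the trajectory. The geometry is simple; the numbers below are illustrative, not our actual rig dimensions:

```python
import math

def mock_disc_radius_m(angular_radius_deg, camera_distance_m):
    """Physical radius of a flat mock 'planet' disc so that it subtends
    the same angular radius at the camera as the real body in flight."""
    return camera_distance_m * math.tan(math.radians(angular_radius_deg))

# Example: mimic the Moon's ~0.26 deg angular radius from 2 m away
r = mock_disc_radius_m(0.26, 2.0)
print(round(r * 1000, 1))  # → 9.1 (mm)
```

A disc about 9 mm across a 2 m darkroom reproduces the Moon as seen from Earth; bodies seen from closer points on the trajectory would scale up accordingly.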
We look forward to confronting this and other challenges in the spring semester, as we move towards final testing and integration of flight hardware.