Introduction
In this tutorial we will check how to obtain an image from a camera and display it on a LCD, using a Sipeed M1 board and MicroPython.
Recall from this introductory post that the Sipeed M1 module is powered by a Kendryte K210 SoC, which allows the development of computer vision applications.
The Sipeed M1 dock suit board used in these tests already ships with both peripherals: an OV2640 camera and a 2.4 inch LCD. Additionally, the board already includes connectors for both devices on the PCB, making it easy to get started without the need for soldering.
At the time of writing, the board also comes flashed with MicroPython, so we can start programming it out of the box. In case your board doesn’t have MicroPython installed, you can check here a guide on how to flash it.
The code
We will start our code by importing the modules we will need. First, we will import the sensor module, which has the functionalities needed to interact with the camera.
import sensor
We will also import the lcd module, which exposes the functionalities for configuring and interacting with the display.
import lcd
After this, the first thing we will do is to initialize the LCD. We will do this with a call to the init function of the lcd module.
As can be seen here, this function has some optional arguments with default values. Nonetheless, for our simple use case, we will not pass any parameters, since the defaults are enough.
lcd.init()
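Just as a sketch, assuming an optional freq parameter (the LCD clock frequency) exists with that name in your MaixPy firmware's lcd module, an explicit call could look like the one below. For this tutorial, the default call above is all we need.
lcd.init(freq=15000000)  # freq is an assumed optional parameter; check your firmware's documentation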
After this we will initialize the camera with a call to the reset function from the sensor module. This function takes no arguments.
sensor.reset()
Then we will set the frame format of the camera with a call to the set_pixformat function from the sensor module. As input, this function receives the frame format to be used.
Our camera supports the RGB565 format, which is the format recommended in the documentation.
sensor.set_pixformat(sensor.RGB565)
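As a side note, other pixel formats may be available, depending on the firmware build. For example, assuming the sensor.GRAYSCALE constant exists in your firmware (it is not used in this tutorial), a single-channel format could be set like this:
sensor.set_pixformat(sensor.GRAYSCALE)  # assumed alternative; we will stick with RGB565 here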
We also need to set the frame size, which can be done with a call to the set_framesize function. As input, the function receives the frame size to be used.
Our camera supports the QVGA size, which is the recommended one for the screen resolution we are using, as can be seen in the documentation.
sensor.set_framesize(sensor.QVGA)
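Likewise, other frame sizes are defined by the sensor module. For instance, assuming sensor.QQVGA (160x120) is available in your firmware, a smaller frame could be requested as shown below, although QVGA is the size we will use in this tutorial.
sensor.set_framesize(sensor.QQVGA)  # assumed alternative; QVGA better matches our LCD resolution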
To start capturing images we simply need to call the run function from the sensor module, passing as input the value 1.
sensor.run(1)
Then, to get an image from the camera, we need to call the snapshot function. This function takes no arguments and returns an object of class Image.
We can pass the output of the previous function call directly to the display function of the lcd module. This function will display the image on the LCD.
lcd.display(sensor.snapshot())
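Equivalently, we can first store the snapshot in a variable and then pass it to the display function, which is useful if we want to do some processing on the image before showing it:
img = sensor.snapshot()  # capture a frame as an Image object
lcd.display(img)  # show the captured frame on the LCD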
To keep taking snapshots and displaying them on the LCD, we just need to repeat the previous command as many times as we want; a simple loop version is also sketched after the final code. The final code can be seen below.
import sensor
import lcd
lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)
lcd.display(sensor.snapshot())
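As a simple extension (a sketch using only the functions covered above), the capture and display calls can be placed in a loop, so the LCD keeps updating with live frames. Note that the loop runs until it is interrupted (for example, with Ctrl+C in the prompt).
import sensor
import lcd

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

# Keep capturing frames and showing them on the LCD
while True:
    lcd.display(sensor.snapshot())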
Testing the code
To test the previous script, simply run it on your board, after having both the camera and the LCD connected.
You can use a serial tool of your choice to connect to the board and send the commands in the MicroPython prompt. In my case I’ve used uPyCraft, a MicroPython IDE. You can check here a short introduction on how to interact with the board using uPyCraft.
After running the previous commands, you should see a result similar to figure 1. As can be seen, the LCD attached to the board is displaying the image captured by the camera.
