Testing the Clamp Output

Connecting my video monitor to the clamp’s output distorts the signal (as seen on the scope), and nothing shows on the monitor. With the monitor disconnected, the waveform looks normal.

With a 0.1uF capacitor, the black level and porches look “tilted”, with the left end at a slightly lower voltage than the right.

With a 4.7uF capacitor, the tilt is no longer visible. Same with a 220uF cap. But still no steady video on the monitor.

Putting a 75 ohm terminating resistor in the circuit, as I would in a real design, halves the amplitude of the video signal. I’m guessing 0.1uF into 75 ohms as the input stage to the 2x gain op amp I plan to use would be OK.

Seems a CVBS waveform can pass through a capacitor OK, but not with enough power to drive a monitor. It needs buffering.

Clamping and Measuring

With the AD724, I measured 0mV at black level, -200mV at sync tip (using a Uzebox I have lying around). With the gameloader screen active, I got -100mV at sync tip and still 0V black. Noticed some movement up and down.

With my small PAL FPV camera, I measured around 180mV at black level, 50mV at sync tip.

With my Runcam action camera, I measured 20mV at sync tip, 180mV at black level.

In conclusion, everything I have generates CVBS with a different DC offset. TV monitors do not care. They adapt to whatever the offset is. But if I am going to mix signals, they will both need to be clamped to a common DC offset. 0V sync tip should be fine. So I will try to make a clamp circuit using just a diode and capacitor.

Made up a clamp circuit on a breadboard using a BAT48 Schottky diode and a 0.1uF ceramic cap. Observed that it did bring the sync tips to the same voltage for my 2 cameras, and that unclamped, each camera’s signal had a different DC offset. However, the sync tips were not at exactly 0 volts as expected, but appeared to be 20-40mV below zero. This must be due to the diode voltage drop.

Now I am not sure what value capacitor to use in a real system. Many video circuits just use 0.1uF. But according to what I read, at least 220uF is needed to pass the full video bandwidth.
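To put rough numbers on that: the coupling cap and the 75 ohm load form a high-pass filter with corner frequency f = 1/(2*pi*R*C). A quick check (a sketch only; it treats the 75 ohm termination as the only load and ignores source impedance and the clamp diode):

```c
/* High-pass corner frequency of the AC coupling cap into a 75 ohm load.
   Sketch only: treats the 75 ohm termination as the only load and
   ignores source impedance and the clamp diode. */
#include <stdio.h>

int main(void)
{
    const double PI = 3.141592653589793;
    const double R = 75.0;                             /* ohms   */
    const double caps[] = { 0.1e-6, 4.7e-6, 220e-6 };  /* farads */

    for (int i = 0; i < 3; i++) {
        double fc = 1.0 / (2.0 * PI * R * caps[i]);    /* fc = 1/(2*pi*R*C) */
        printf("%6.1f uF -> corner at about %.1f Hz\n", caps[i] * 1e6, fc);
    }
    return 0;
}
```

That works out to roughly 21 kHz for 0.1uF, 450 Hz for 4.7uF and about 10 Hz for 220uF, so only the 220uF value puts the corner well below the 50 Hz field rate, which matches the advice that a couple of hundred microfarads is needed when the load is 75 ohms.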

Revisiting the AD724

Made a design for an OSD board using a genlocked AD724. When I attempted this previously, one issue seemed to be mismatched DC levels for the input video and the OSD video. To get them matched, both signals will need to be clamped. Normally video signals that are AC coupled are clamped to 0V for black level, which means sync tips are -0.3V. So I need a circuit to do that.

Design Note DN327 from Linear Technology has a clamp circuit using a diode and a capacitor-resistor network. Not sure what DC level it actually clamps to; it is probably optimized for the amplifier in the circuit, which would have sync tips at >= 0V.
Wondering if this actually matters? Assuming the next stage down from the OSD (the monitor, or the vtx) is going to be AC-coupled, the DC level is going to be stripped out anyway. As long as the black level of both signals is equal it is probably OK.

If I do clamp to a DC offset with no negative excursion, maybe I won’t need a negative supply for the amplifier and pixel switch either.

My plan for the time being is to experiment with clamp circuits before continuing with the OSD board design.

Current Draw

Using my multimeter, I measured 436mA drawn before the LDOs for the OSD circuit. This does not include MCU current.

Measured 260mA drawn after the LDOs on the 3.3V supply. Measured 70mA on the 1.8V supply. (All using the Amps scale, on the 10A unfused input).

If this is accurate, then I might be able to achieve around 360mA using a switching power supply with 80% efficiency (not including MCU current draw).
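To check whether that figure is plausible, here is the arithmetic, assuming both LDOs are fed from 5V and that the remaining current (about 106mA) runs directly off the 5V rail (an assumption; I haven't traced exactly what is powered from raw 5V):

```c
/* Back-of-envelope check of the "~360mA with a switcher" estimate.
   Assumption: both LDOs are fed from 5V, and whatever current is not
   accounted for by the 3.3V and 1.8V rails runs directly from 5V. */
#include <stdio.h>

int main(void)
{
    const double v_in = 5.0, efficiency = 0.80;

    const double i_before_ldos = 0.436;          /* A, measured            */
    const double i_3v3 = 0.260, i_1v8 = 0.070;   /* A, measured after LDOs */

    double i_direct_5v = i_before_ldos - i_3v3 - i_1v8;   /* ~0.106 A       */
    double p_rails     = 3.3 * i_3v3 + 1.8 * i_1v8;       /* ~0.98 W        */
    double i_switcher  = p_rails / efficiency / v_in;     /* ~0.25 A at 5V  */

    printf("estimated total from 5V: %.0f mA\n",
           (i_direct_5v + i_switcher) * 1000.0);          /* ~350 mA        */
    return 0;
}
```

That comes out at around 350mA, so the 360mA guess looks about right.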

New Approach, New Ideas

Now looking at making a new board, using the AD724 clocked in sync with the incoming video signal’s colourburst. This is looking like quite a challenge. The basic principle has been shown to work, but it requires a lot of ICs. Am considering using the AD8001 as the pixel switch as it has “excellent video characteristics” according to the datasheet, which also gives a reference circuit for exactly that. However, it requires a dual ±5V supply and I am not sure what is the best way to provide this. More research needed. Could it be as simple as 2 +5V LDOs, with the positive rail of one tied to the negative rail of the other, and that junction becoming GND?

Am also wondering how much of the 550mA used by my existing design is due to losses in the inefficient linear voltage regulators. If I changed them to switch-mode regulators, could I bring the power down to 300mA? And could I do this while keeping noise within acceptable limits? I will try to measure the actual current drawn by the video chips from the LDO regulators. To date I have only measured total current drawn from the 5V USB power supply, which includes losses from the LDOs.

Revised font system and other changes

Before attempting new hardware I wanted to finish 2018 with a better, more performant font system and tidy up other loose ends in the code. These changes are now complete. I am now storing fonts as 2bpp bitmaps which allows me to store the outline along with the character data, giving much better performance when rendering in outline mode.
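As an illustration of the idea (the names, packing order and colour indices here are hypothetical, not the actual code): with 2 bits per pixel, each glyph pixel can encode transparent, black outline or foreground directly, so no outline has to be computed at render time.

```c
/* Hypothetical sketch of a 2bpp glyph format: 4 pixels per byte, where
   0 = transparent, 1 = black outline, 2 = foreground. Names and packing
   order are illustrative only, not the actual OSD code. */
#include <stdint.h>

enum { PX_TRANSPARENT = 0, PX_OUTLINE = 1, PX_FOREGROUND = 2 };

static void blit_glyph_row(uint8_t *dst, const uint8_t *glyph_row,
                           int width, uint8_t fg_colour, uint8_t black)
{
    for (int x = 0; x < width; x++) {
        /* two bits per pixel, most significant pair first */
        int px = (glyph_row[x / 4] >> (6 - 2 * (x % 4))) & 0x3;

        if (px == PX_FOREGROUND)
            dst[x] = fg_colour;          /* character body                */
        else if (px == PX_OUTLINE)
            dst[x] = black;              /* outline baked into the font   */
        /* PX_TRANSPARENT: leave the underlying video pixel untouched     */
    }
}
```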

Outlining in black is normally necessary around all character data, as well as most graphics, because they tend to disappear against a white or light background. An OSD must be easy to read against any kind of background – you don’t want to have to nose down to read the altimeter because it is unreadable against the sky. Now almost everything has a black outline to avoid this problem.

I also implemented a variometer and experimented with yellow-white text and graphics.

I now have a font editor which I will be uploading to GitHub soon. It is very far from production quality code, simply the bare minimum necessary for me to be able to edit fonts instead of hand-coding them in raw binary as I was doing previously.

Now that these tasks are out of the way I will focus on the new hardware design. I will start by using a genlocked AD724.

Progress, and some new ideas

With altitude, airspeed and heading indicators as well as an RTH arrow and a better attitude indicator, it is starting to look like a proper HUD. I have made the indicators white-green to reduce the effects of limited chroma bandwidth, and the Armed warning is now black on yellow for the same reason.

Progress.
Pitch ladders, heading, RTH arrow, it’s starting to look good. Pity the Armed warning is covered up, but that will soon be fixed. It’s time to get rid of the rainbow test pattern.

The attitude indicator needs a boresight, and text rendering needs some tidying up – you can see the imperfections if you look closely. Next item to implement will be a battery meter. It will use colour to show battery status, progressing from green to yellow, orange and red as the voltage drops. I will also try to implement a generic “indicator” which can be used to display any kind of parameter. It will have a text label and an optional colour swatch or icon, which can change in response to the parameter. It can be used to show things such as GPS satellite count, temperature, current and so on.
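For the battery meter, the colour mapping could be as simple as a few per-cell voltage thresholds. A sketch (the threshold values and names are placeholders; in practice they would be configurable):

```c
/* Hypothetical battery colour mapping. The thresholds are placeholders;
   in the real OSD they would be per-cell, user-configurable values. */
typedef enum { COLOUR_GREEN, COLOUR_YELLOW, COLOUR_ORANGE, COLOUR_RED } colour_t;

static colour_t battery_colour(float cell_volts)
{
    if (cell_volts >= 3.8f) return COLOUR_GREEN;   /* comfortable   */
    if (cell_volts >= 3.6f) return COLOUR_YELLOW;  /* getting low   */
    if (cell_volts >= 3.5f) return COLOUR_ORANGE;  /* plan to land  */
    return COLOUR_RED;                             /* land now      */
}
```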

Some time in the future I would also like to have analog gauges as an alternative to HUD-style tapes. You will be able to have the traditional “6-pack” instruments if you want to.

After some discussion with airbanana at RC Groups I am going to make another attempt to generate the overlay by genlocking and pixel switching, instead of using digital decoder and encoder ASICs. If this works it should use less current (at the moment my dev board and MCU draw about 0.5A). There are 2 ways I could do this. One is the method I tried earlier, using an AD724 clocked in sync with the colour subcarrier. Another possibility is the approach used here. Rossum is generating the entire composite signal including the colourburst using the SPI port. To do that he needed to overclock the MCU to a multiple of the colourburst frequency. But the SPI port can be driven by an external clock. I can obtain a clock signal in sync with the colourburst using an MC44144 and a comparator as I did previously, and try using that to drive the SPI. It will be some time before I can try either approach as I will need to fabricate a new board. In the meantime I will continue working on the graphics.
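To make the SPI idea concrete, the port would have to be configured as a transmit-only slave so that SCK becomes an input driven by the colourburst-locked clock, with DMA streaming out pre-built scanline bit patterns. A very rough, untested sketch using the STM32 HAL (handle name, buffer size and all pin/DMA setup are placeholders):

```c
/* Very rough, untested sketch of the "SPI driven by an external clock"
   idea: configure the SPI as a transmit-only slave so SCK becomes an
   input (driven by the colourburst-locked clock) and let DMA stream a
   pre-built scanline of bit patterns out of MOSI. Handle name, buffer
   size and all pin/DMA setup are placeholders. */
#include "stm32f4xx_hal.h"

extern SPI_HandleTypeDef hspi1;     /* initialised elsewhere          */
extern uint8_t line_bits[180];      /* one scanline of pixel bits     */

static void spi_extclk_init(void)
{
    hspi1.Instance            = SPI1;
    hspi1.Init.Mode           = SPI_MODE_SLAVE;   /* SCK supplied externally */
    hspi1.Init.Direction      = SPI_DIRECTION_2LINES;
    hspi1.Init.DataSize       = SPI_DATASIZE_8BIT;
    hspi1.Init.CLKPolarity    = SPI_POLARITY_LOW;
    hspi1.Init.CLKPhase       = SPI_PHASE_1EDGE;
    hspi1.Init.NSS            = SPI_NSS_SOFT;
    hspi1.Init.FirstBit       = SPI_FIRSTBIT_MSB;
    hspi1.Init.TIMode         = SPI_TIMODE_DISABLE;
    hspi1.Init.CRCCalculation = SPI_CRCCALCULATION_DISABLE;
    HAL_SPI_Init(&hspi1);
}

static void send_scanline(void)
{
    /* data is shifted out at whatever rate the external clock provides */
    HAL_SPI_Transmit_DMA(&hspi1, line_bits, sizeof line_bits);
}
```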

Full Screen

I have finished porting the code over to the new STM32F413 Nucleo board and now have a frame buffer big enough to cover the whole screen. You can see the output here.

Full frame buffer
360 * 288 frame buffer, instruments in white. Notice how much better the text looks.

Word from Analog Devices regarding the chroma transient issue I mentioned earlier is that it is a result of conversion to a YCrCb 4:2:2 digital format by the decoder. This format significantly reduces the amount of video data, but at the expense of chroma resolution. Chroma is sacrificed because the human eye is much more sensitive to luma detail than chroma, so the loss is barely noticeable with a scene from ordinary television. However with coloured text and graphics it does become noticeable. From what I have observed, the effect is most noticeable with primary colours (pure red, blue or green). White or black pixels are the least affected and intermediate colours (yellow, magenta), while still affected, look better than primary colours. This makes sense when you consider that white pixels are almost entirely luma in the data stream, while primary colours contain the greatest proportion of chroma information.
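One way to see why narrow coloured strokes suffer: in 4:2:2, each pair of horizontally adjacent pixels shares a single Cb and a single Cr sample, so a one-pixel-wide red stroke has its chroma averaged with whatever sits next to it. Illustrated below (the Cb/Y/Cr/Y ordering is an assumption; the actual byte order on the pixel bus depends on the format selected):

```c
/* Illustration of 4:2:2 chroma sharing (Cb,Y0,Cr,Y1 ordering assumed;
   the actual byte order on the pixel bus depends on the format chosen).
   Two neighbouring pixels always share one Cb and one Cr sample, so a
   single-pixel-wide coloured stroke cannot keep its own chroma. */
#include <stdint.h>

typedef struct {
    uint8_t cb;   /* blue-difference chroma, shared by pixel 0 and 1 */
    uint8_t y0;   /* luma, pixel 0                                   */
    uint8_t cr;   /* red-difference chroma, shared by pixel 0 and 1  */
    uint8_t y1;   /* luma, pixel 1                                   */
} ycbcr422_pair_t;
```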

For the time being I will have to live with this limitation. I will try to mitigate it by outlining text with black pixels (as most OSDs already do) and making coloured output at least 4 pixels wide (which will be 2 pixels in my half-resolution overlay).

It would make sense to render the altimeter, airspeed and AHI in white, and reserve colour for warning messages and status icons so that is what I am doing now. Ultimately this will be the sort of thing users can set according to their own preferences.

The way to avoid the chroma issue altogether would be to stream the data as RGB 4:4:4, which has no chroma subsampling, or as YCrCb 4:4:4. The ADV7341 encoder I am using understands RGB 4:4:4, but the ADV7184 cannot generate it and does not have a wide enough pixel bus.

Migrated to STM32F413

I have decided to stop developing against the STM32F407 and switch to the STM32F413, the main reason being that this part has much more on-chip SRAM – 320KB to be exact. This will be enough for a 360 * 288 double frame buffer, which will give the same resolution as the MAX7456 chip used in the MinimOSD board.

Although this is only half the resolution of an analog TV picture, it is adequate for graphics and text overlays.
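The frame buffer arithmetic, assuming one byte per pixel for the overlay (an assumption; the final pixel format may end up different):

```c
/* Frame buffer budget on the STM32F413, assuming one byte per pixel
   (an assumption; the final overlay pixel format may differ). */
#include <stdio.h>

int main(void)
{
    const unsigned width = 360, height = 288, bytes_per_px = 1;
    const unsigned sram_kb = 320;

    unsigned one_buffer  = width * height * bytes_per_px;   /* 103,680 bytes */
    unsigned two_buffers = 2 * one_buffer;                   /* 207,360 bytes */

    printf("double buffer uses %u KB of %u KB SRAM\n",
           two_buffers / 1024, sram_kb);
    /* about 202 KB, leaving roughly 117 KB for stacks, fonts and buffers */
    return 0;
}
```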

I have purchased a Nucleo-F413ZH board and ported the code over to it. It is working, but I have not taken advantage of the extra RAM to create a larger frame buffer yet. That will be the next objective.