This project started about two and a half years ago when I took an FPGA class at MIT and the professor happened to give me an HP1662AS logic analyser that MIT was throwing out. Despite the fact that I lived in the UK and this thing is massive and weighs 20kg, I took it anyway. I thought it would be a cool project to replace the cathode ray tube with a larger LCD display, and that I would get this done before I left the US three months later… Little did I know it was going to take me another two years (although not of continuous work). Now I am nearly done, and just in time to ship it back to the US when I move back for work.
The aim was simple: I wanted to remove the CRT display and replace it with an LCD. I claimed that this was because it would make the unit lighter, nicer looking, and less likely to fail – in reality I just thought it was an interesting project and an opportunity to learn a lot (I was at least correct about this).
An extra goal was to add colour to the display. This is something I may still do in the future, but for now I want to put this project to rest.
My only hard and fast rule was that the display must not come out looking worse: it had to be at least as good as, and preferably better than, the original display.
Before I got started I needed to understand exactly how the display in the HP1662AS worked. Luckily it comes from the days when manufacturers actually provided user guides and service manuals, and these told me a lot of what I needed to know. The display assembly was connected by a single ribbon cable, and from the service manual I learnt that the video signal was a single wire, accompanied by VSYNC and HSYNC signals on separate wires, with the remaining connections being power and ground. The nice thing about this is that it is very similar to VGA, which has only these connections too, albeit with two more video signals to give full RGB colour.
This was a good start, but I needed to know the timings and voltage levels. Using my PicoScope I measured the VSYNC, HSYNC and video signals, which showed me the polarity of the VSYNC and HSYNC as well as the voltage levels. Using a multimeter I could tell that the video signal was 75Ω terminated, like normal VGA. The video signal ranged from 0 to 700mV when terminated correctly, also like VGA.
To properly use these signals I needed their exact timings, including the pixel clock frequency, the number of rows and columns, the HSYNC/VSYNC lengths, and the lengths of the horizontal and vertical front and back porches. I needed to figure out where these signals were generated on the HP so that I could get the exact values. Luckily the service manual mentions that the logic analyser uses a “Brooktree BT476KPJ66 RAMDAC color palette and a National Semiconductor LM1882CM video frame generator [to] control the CRT”, which gave me a place to start.
I tracked down the LM1882CM on the motherboard and measured the relevant signals. From this I learnt that the pixel frequency was 20MHz at a resolution of 576×378, and the timing parameters were:
- Total Lines = 417
- Vertical Back Porch = 11
- Vertical Front Porch = 25
- VSync = 3
- Active Lines = 378
- Total Dots = 800
- Horizontal Back Porch = 24
- Horizontal Front Porch = 136
- HSync = 64
- Active Dots = 576
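These figures are internally consistent, and a quick sanity check (a Python sketch of my own, not anything from the original project) recovers the refresh rate:

```python
# Sanity-check the measured timing parameters (values from the list above).
PIXEL_CLOCK_HZ = 20_000_000

ACTIVE_DOTS, H_FRONT_PORCH, HSYNC, H_BACK_PORCH = 576, 136, 64, 24
ACTIVE_LINES, V_FRONT_PORCH, VSYNC, V_BACK_PORCH = 378, 25, 3, 11

total_dots = ACTIVE_DOTS + H_FRONT_PORCH + HSYNC + H_BACK_PORCH    # 800
total_lines = ACTIVE_LINES + V_FRONT_PORCH + VSYNC + V_BACK_PORCH  # 417

line_rate_hz = PIXEL_CLOCK_HZ / total_dots   # 25 kHz per line
frame_rate_hz = line_rate_hz / total_lines   # ~59.95 Hz refresh
print(total_dots, total_lines, round(frame_rate_hz, 2))
```

The active, porch and sync counts sum to the measured totals of 800 dots and 417 lines, and the implied refresh rate is roughly 59.95Hz.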
With this knowledge in hand, I was ready to start building something.
I often jump straight into a PCB with my projects, hoping to get the design right on the first attempt. This, I think, may be the first time I have willingly and purposefully created a prototype PCB because I just knew I wouldn’t get it right first time. This is probably due to a combination of factors: this is the most complex project I have done, I have a bit more money now that I have a full-time job, and I’ve got it wrong enough times before.
I already had an FPGA development board with a VGA output on it so I just needed a way to get the video signal into the FPGA, and an LCD to display it on. I bought a 10.1″ 1024×600 LCD panel from buydisplay.com for around £50 with the drive electronics included. Since this decision I have stuck with a VGA interface between the FPGA and the LCD because that’s what is simplest.
Video Input Prototype
The prototype board I designed (called the Video Board) used a TVP7002 chip from Texas Instruments. This is a 3-in, 1-out muxable RGB 10-bit ADC and includes a PLL to generate the pixel clock from HSYNC (called the horizontal PLL).
The schematic is available here.
As predicted I made a couple of mistakes. Firstly, HSYNC and VSYNC should be connected directly to the chip, rather than via C23 and C32, so I removed these. Secondly, I connected the HSYNC and VSYNC pins the wrong way round. Luckily, where C23 and C32 used to be, I was able to insert two jumpers that swapped the two. The only remaining problem was that the RC filter made by R9 and C12 was on HSYNC rather than VSYNC but it worked anyway.
I also spent a long time diagnosing a bug that turned out to be a power pin that was not correctly soldered down – these things happen.
The TVP7002 is highly configurable and is configured via an I2C bus by an ATTiny87 microcontroller which does nothing else. The input connector is at the bottom and matches the output of the logic analyser so that it could be plugged in. The video signals then pass via some buffering to the TVP which digitises the analogue signal and outputs it via the two PMOD connectors. These connectors fit my Digilent Nexys 4 DDR board which I used for the initial prototype.
Before getting bogged down in the FPGA development, I needed to make sure that the video board was outputting the correct stuff. To verify this, I connected the output of the video board to my 16-channel Rohde & Schwarz RTB2004 mixed-signal oscilloscope and captured the data for a single frame. I then loaded the data onto my PC and used Matlab to try to reconstruct the video.
Clearly the frame was a bit sketchy but, as I said, the TVP is highly configurable and this could be improved with correct register values (e.g. gain and offset). This gave me the confidence to move ahead with the FPGA.
The first thing to do was to define the pin constraints, which specify which pins on the FPGA are connected to which pins on the TVP. This was the first stumbling block. It turned out that the pin I had selected for the pixel clock was not a global clock input pin for the FPGA and couldn’t be routed efficiently inside the FPGA, resulting in errors from Xilinx Vivado’s place and route. Fortunately, the second least significant bit of the video data was connected to a global clock input, so this problem was rectified with a scalpel and a bodge wire. I didn’t need that data bit anyway as I was only interested in the four most significant bits for the prototype.
The FPGA code was broken into two main parts: the RX module, which received the data from the TVP, and the TX module, which sent video data to the LCD (the FPGA board featured an external resistor-ladder DAC for each colour, so the output was just 4 bits per colour). Because the input and output resolutions and timings were different and sat in entirely separate clock domains (20MHz for RX and 48.875MHz for TX), the RX and TX frames could not be kept in sync – the input ran at 59.95 frames per second while the output ran at 59.89 FPS. Therefore a full frame buffer was required.
(Because I had to keep the case off of the logic analyser for long periods of time, I put it inside a plastic bag to protect it from dust while also letting me access the cables and buttons.)
The buffer consisted of a 1.66Mbit dual-port RAM internal to the FPGA, which is just enough to hold 8 bits per pixel at the input resolution of 576×378. The RX module wrote to the RAM when it received new data and the TX module read from it when it needed to. This allowed complete separation between the two clock domains.
Because the output resolution was 1024×600, some upscaling was required. The system simply used the nearest-neighbour algorithm, selecting the value of the nearest pixel in the input image. This results in a somewhat blocky image, but it looks acceptable to my eye.
This was all made easier by the fact that I was using a pretty beefy FPGA. The Digilent board featured a Xilinx XC7A100T-1CSG324C chip, which has 4,860Kbit of block RAM (BRAM) – much more than required. It also contains hardware multipliers, which meant calculating the nearest neighbour was as simple as a multiplication and a division.
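As a sketch of what that multiply-and-divide amounts to (hypothetical Python, not the actual HDL), each output coordinate is mapped back to an input coordinate by scaling with the ratio of the two resolutions:

```python
def nearest_src_index(dst_index: int, src_size: int, dst_size: int) -> int:
    """Nearest-neighbour mapping: which input pixel/line feeds this output one."""
    return dst_index * src_size // dst_size

# Horizontal: 1024 output pixels drawn from 576 input pixels.
cols = [nearest_src_index(x, 576, 1024) for x in range(1024)]
# Vertical: 600 output lines drawn from 378 input lines.
rows = [nearest_src_index(y, 378, 600) for y in range(600)]

# Because the input is smaller than the output, every input pixel/line
# is used at least once and none are skipped.
assert cols[0] == 0 and cols[-1] == 575
assert rows[0] == 0 and rows[-1] == 377
```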
Choosing an FPGA
With the confidence that the prototype had given me, I was ready to go ahead and design my final board. The first question was which FPGA to use – ideally I would use the same one I had used in the prototype, but realistically that chip costs over £95 and is way more powerful than necessary – it would simply be bad engineering to do so.
Ideally I would use an FPGA that cost less than £10, and it had to have accessible development tools. I’d never used Altera parts before and didn’t want to start now, and Xilinx parts are on the expensive side. Lattice Semiconductor offer a range of low-cost FPGAs and I am comfortable with their tools, having used them for my Master’s degree project. However, at the time I couldn’t find an FPGA with enough RAM (1,741,824 bits) for a whole frame buffer at a reasonable price (since then more reasonable options have appeared, such as the Lattice LFE5U-45F-6BG256C which costs £12, but that is a 256-pin BGA which is not easy to handle). I needed to investigate the minimum amount of RAM I could get away with.
How much RAM do I need?
I ran a Matlab simulation to show the progress of the input and output frames if they both began at the same time.
In the graph you can see that initially the two frames start at line zero, but as time progresses the input frame reaches the end of the frame faster than the output frame. However, it also shows that, if both frames start at the same time, the input frame is never more than ~27 lines ahead of the output frame. Therefore, if I could keep the two frames in sync, I would only need enough memory to store 28 lines of the image (~129,000 bits), a reduction of over 92%.
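The arithmetic behind that claim, using the figures from the text:

```python
# Frame-buffer sizing, using the resolutions quoted in the text.
WIDTH, HEIGHT, BITS_PER_PIXEL = 576, 378, 8

full_frame_bits = WIDTH * HEIGHT * BITS_PER_PIXEL        # 1,741,824 bits
buffer_lines = 28                                        # worst-case skew of ~27 lines, plus margin
line_buffer_bits = buffer_lines * WIDTH * BITS_PER_PIXEL # 129,024 bits

reduction = 1 - line_buffer_bits / full_frame_bits
print(line_buffer_bits, f"{reduction:.1%}")              # 92.6% smaller
```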
Keeping Frames in Sync
Until this point I had been adhering to the timings specified in the 1024×600 video standard. This meant a 48.875MHz pixel clock as well as a certain number of dots per line and lines per frame. However, this was what caused the timing difference between the input and output frames; I needed to make sure that the two frames started simultaneously. The way I achieved this was effectively to insert extra lines into the vertical back porch of the output video to keep the frames approximately in sync. This was done by adding an RX_TX_SYNC wire between the RX and TX modules. The RX module would assert RX_TX_SYNC when it was on the 12th line of its frame. When the TX module was in the vertical back porch region of its scan, it would remain in the back porch until RX_TX_SYNC was asserted, at which point it would start the active portion of its scan at the end of the current line. The result was that lines were effectively added to or removed from the normal output scan as required to keep the two frames roughly in sync.
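A rough sketch of the TX side’s behaviour (hypothetical Python pseudocode of my own with placeholder porch lengths – the real design is HDL and the actual counts differ):

```python
def tx_vertical_scan(rx_sync_asserted, active_lines=600, nominal_back_porch=3):
    """One output frame's vertical state sequence.

    rx_sync_asserted: callable that returns True once the RX module has
    reached its trigger line (the 12th line of the input frame).
    The sync/porch lengths here are placeholders, not the real timing.
    """
    states = ["vsync"]                       # VSYNC pulse (length elided)
    states += ["back_porch"] * nominal_back_porch
    while not rx_sync_asserted():            # stall: extra back-porch lines
        states.append("back_porch")          # are emitted until RX catches up
    states += ["active"] * active_lines
    states.append("front_porch")
    return states

# Example: RX asserts sync five lines after the nominal back porch ends,
# so five extra back-porch lines are inserted into this frame.
calls = iter([False] * 5 + [True] * 100)
frame = tx_vertical_scan(lambda: next(calls))
assert frame.count("back_porch") == 3 + 5
```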
I also increased the pixel clock frequency from 48.875MHz to 48.925MHz to make the frame rates match more closely.
This was effective and allowed me to shrink the frame buffer to just 28 lines.
Settling on an FPGA
Finally I settled on the Lattice iCE40HX8K, which comes only in a 121-pin 0.8mm-pitch BGA package and has 128Kbit of RAM – only 1.6% more than I needed. I had never soldered a BGA package before, let alone a 121-pin one; nonetheless I considered this a challenge that I had to face eventually.
The final schematic can be found here.
The schematic for the final board was built upon the video board schematic from the prototype. The section for the TVP7002 video ADC and the ATTiny microcontroller is basically the same with corrections to the mistakes I made in the prototype. The output of the TVP is now fed into an iCE40HX8K FPGA instead of to a header. Finally an Analog Devices ADV7125 video DAC is controlled by the FPGA.
The schematic for the ADV section is more or less directly taken from the datasheet. Because it is such a simple chip (relatively) I was happy to dive straight in without a prototype of this chip.
Something worth noting is that I added pin connections between the ATTiny and the FPGA because I thought that it may be useful to have some kind of feedback loop so that the microcontroller could adjust the settings on the TVP if the FPGA didn’t think the video looked correct. This turned out to be incredibly useful.
I also added a debug LED and a debug header, which were also indispensable in getting everything working smoothly. I never used the external clock pin, but I thought it could be useful for injecting a clock signal if the TVP wasn’t working properly.
I had little option other than to go for a four-layer board; simply breaking out all the traces from the FPGA would probably have been impossible otherwise. I followed Xilinx’s recommended spacing (with one mistake mentioned later) from “Recommended Design Rules and Strategies for BGA Devices” and used Lattice’s “PCB Layout Recommendations for BGA Packages” to escape the connections.
These via and trace sizes were too small for my usual PCB manufacturer Seeed Studio and I ended up going back to OSH Park which actually turned out quite well, even though I’m not too keen on the purple solder mask, and the mouse-bites on the edge of the PCB annoy me.
The flow of the PCB is pretty linear. Data comes in on the connector at the bottom and enters the TVP. This data is then digitised and passed to the FPGA which then passes it onto the ADV which outputs an RGB video signal via the connector at the top.
I kept all silicon on the top side of the board and put most passives on the bottom side to save space and also to keep decoupling capacitors as close to the power pins as possible. The middle two layers were reserved for ground and power floods.
The 3.3V regulator is purposely placed exactly on the left edge of the board so that I could heatsink it well – I knew that it was going to get hot.
It is funny to me that, just by going from a two-layer board in the prototype to a four-layer board, I produced a final design that was smaller in area than the prototype but featured significantly more components and functionality.
Overall soldering went quite smoothly. As usual for me, I soldered subsystems one-by-one, making sure each subsystem worked before populating the next subsystem. This is, of course, not the most efficient way to solder a board but it’s much easier to debug when something goes wrong if only a small number of components are added at once.
The first chip I soldered was the FPGA, because I knew that this was really the only risky soldering job – the rest of the chips were QFP or SOIC, of which I have hand soldered plenty before. My first attempts used my hot air station; three chips and £18 down the drain later, I realised that I just couldn’t get enough heat from my small Quick 860DA. Begrudgingly I ordered three more and borrowed a reflow oven from a friend. The first attempt with the reflow oven was a success – but there was a short from the 3.3V rail to GND somewhere on the PCB. After removing all the components again, the short turned out to be a manufacturing error on the board.
My discussions with OSH Park revealed that I had made one mistake on the board: I had misinterpreted the minimum annular ring as a minimum thickness. This meant that my design had features that OSH Park did not guarantee they could manufacture.
Luckily I had three boards, and at least one of them was not shorted; soldering went fine from there.
It seems that each of the three major chips on the board (ADC, FPGA and DAC) provided me with one issue that took almost exactly a week to solve. I’m actually quite pleased with myself for managing to see these issues through to the end and get a resolution, despite the long evenings and weekends and occasional dead ends – ultimately the happiness I got from solving each issue was worth a week of stress.
iCE40 FPGA Programming Bug
Once the FPGA was soldered I was hoping it would be plain sailing from there. The programmer recognised the chip, so it seemed to do something. But when it came to programming the chip I ran into a really confusing bug.
The iCE40 has a DONE pin which is an open collector and is designed to have a pull-up resistor on it. This pin then goes high when programming is successful and it is possible to attach an LED to this pin so that it lights up when the chip is successfully configured (in my case it was a red LED). What I found was that after programming, the red “done” LED turned on, but the chip didn’t turn on the green LED as it was programmed to do. However, as shown in the video, I could tap randomly around on the board and eventually everything would start working.
I spent about a week of evenings trying to figure out what was going on. I tried tapping it with conductors and insulators. I tried shorting various pins to various rails. I tried monitoring power rails and reset pins to make sure they were stable. Everything was looking fine. I was tearing my hair out and running out of ideas when suddenly I remembered.
The “iCE40 Programming and Configuration” document specified that the pull-up resistor on the DONE pin has a maximum acceptable value, calculated from the capacitance of the PCB traces. When I was designing the schematic I had chosen a value of around 600Ω, based on what was used on the iCE40 evaluation board, because I didn’t know what the capacitance was going to be; but at some point I had bumped this up to a 2.2kΩ resistor to decrease the brightness of the LED. Switching it back to a (I think) 610Ω resistor fixed the problem.
Ultimately I think this problem occurred because Lattice do not specify why this limitation exists and what the symptoms would be. If I knew these details, perhaps I would have made the link between the symptoms and the cause much sooner. Nonetheless, this problem could have been avoided if I had put a suitable annotation on my schematic to remind me of the constraint.
ADV7125 Soldering Bug
Minus one botched soldering job, the ADV7125 video DAC seemed to work at first. I programmed the FPGA to perform a linear sweep of the three outputs (RGB) and measured it on my scope and it all seemed good.
I even got basic video working by generating test patterns – until it just broke. There was a new intermittent bug that would stop the ADV outputting any video for hours at a time before it would suddenly start working again for about 30 minutes and then, once again, cease to work. This is basically the worst kind of bug because it is a) intermittent and b) intermittent over a large time scale. For this reason it took multiple rounds of me thinking I had fixed it before I actually had. My general theory was that it was some kind of dry joint, so I tried heating the board up with my hot air gun and cooling it down with freezer spray – neither made much difference. I tried reflowing all the joints – no real difference.
I stumbled across the solution while probing the pins of the ADV one-by-one with a multimeter. I accidentally slipped and shorted the COMP pin to the IOR output and suddenly the chip started working for a while. I tried this a few more times and it kept working – I didn’t really care if this broke the chip by this point since it didn’t work anyway. The datasheet doesn’t state what the COMP pin does exactly other than that it is a compensation pin of some kind and needs to be connected to Vcc via a capacitor. I replaced this capacitor and it resolved the issue – so I guess it was a bad capacitor because reflowing hadn’t worked, but I’ll never know for sure because I lost the capacitor immediately after.
The block diagram above shows the structure for the whole system. The flow of data goes from left to right, starting with the TVP7002 ADC and ending with the ADV7125. There is a pretty clear division between the left and right sides of the design which is bridged by the dual port ram.
The RX module contains the logic to receive the video signal from the HP and store it in RAM. The TX module has the logic to read data from RAM and output it to the ADV. There is a sync line that passes from the RX module to the TX module in order to keep the frames in sync, preventing the RAM buffer from overflowing.
The RX module is clocked directly from the TVP7002, which reconstructs the incoming pixel clock from the incoming HSYNC pulse with an internal PLL. The clock for the TX module is then generated using one of the FPGA’s internal PLLs, which bumps the 20MHz input clock up to around 49MHz. Choosing the output clock frequency was tricky, so I will dedicate a section to it.
Choosing the TX Clock Frequency
The standard clock frequency for a 1024×600 image is 48.875MHz; however, I had bumped this up to 48.925MHz in the prototype in order to better align the frame rates of the input and output video signals. This worked just fine and my assumption had been that I would do the same in the final build. What I had overlooked, however, was that the PLL inside the iCE40HX8K was not able to generate a 48.925MHz clock from a 20MHz input – the closest two options were 48.75MHz and 49.38MHz. While these may sound close, both were far enough from the original that not only did the input and output frames not synchronise well, but the LCD actually detected a resolution other than 1024×600, which was definitely not acceptable as I wanted to run the display at its native resolution.
I tried a number of methods of getting around this. My first thought was to chain together both PLLs available to me on the FPGA to achieve the 48.925MHz frequency in two steps. In theory this works, but in reality the Place&Route stage of synthesis could not find a viable way to route this design, so that was out of the question.
It turned out that, while the PLL could not synthesise a 48.925MHz output from a 20MHz input, it could from a 100MHz input. My thought was that I could use the TVP to generate a pixel clock of 5× the true pixel clock by fudging the register settings, then use that 100MHz signal as the input to the FPGA and sample the video signal on every 5th rising edge. I did not like this solution because it would draw more power through the already hot voltage regulator, and because I was worried that my PCB layout was not sufficient for such a high frequency and that it might also strain the setup and hold times of the digitised video signal. Nonetheless it may have been a viable solution had I not come up with a better one.
The solution I settled on was neater in some ways and less neat in others. Rather than make the clock faster or slower, I made the lines shorter. By cutting a few pixels out of the horizontal front porch, back porch and sync, I could match the output frame rate to the input frame rate while still using the 48.75MHz clock that the PLL could easily generate. This is neat because it doesn’t require any more electrical changes; however, it does result in the video signal straying further from the video standard. It is basically the same thing I was already doing to the vertical back porch to keep the frames in sync.
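To see how line-shortening trades against clock frequency, the relationship can be sketched as follows (a hedged illustration: the output’s total line count here is my assumption for the example, not the project’s actual figure):

```python
# Input frame rate from the measured HP timings: 20 MHz over 800 x 417.
INPUT_FRAME_HZ = 20e6 / (800 * 417)   # ~59.95 Hz

def required_dots_per_line(pixel_clock_hz, total_lines, target_fps=INPUT_FRAME_HZ):
    """Total dots per output line needed for the output frame rate to
    match the input frame rate at a given pixel clock."""
    return pixel_clock_hz / (total_lines * target_fps)

# With the 48.75 MHz clock the PLL can generate, and an assumed total of
# 622 output lines, each line must be trimmed to roughly this many dots:
print(round(required_dots_per_line(48.75e6, 622)))
```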
Ensuring Correct Video Sampling
The TVP7002’s job is to take the analogue video signals, reconstruct the 20MHz video clock and then use this clock to sample the video data and output a 10-bit video stream.
At first I found that the clock signal was very jittery and the resulting video flickered a little. The solution was to double the frequency of the TVP’s PLL and to set the DIV2 bit in register 0x04, which divides the frequency by two again and improved the jitter. However, I later realised that this was causing another unsightly issue with the video signal.
About half the time that the TVP7002 powered up the video would look crisp, while the other half of the time there would be an obvious shadow around letters and the writing would look less vivid.
After a little research I realised that when the TVP divides the clock by two, there are two possible resulting clock signals 180 degrees out-of-phase with one another – each occurring 50% of the time. This meant that half the time the video was sampled in the middle of each pixel and half the time the video was sampled at the transition between pixels resulting in a smeared video signal that didn’t look good.
The first step was for the system to determine when the issue was occurring, and the second was to update the Phase Select bits of register 0x04 on the TVP to rectify it. These bits allow the phase of the video clock to be varied smoothly between 0 and 360 degrees. As the output signals of the TVP only went to the FPGA, the detection had to happen on the FPGA; and as the registers on the TVP could only be set by the ATTiny, there had to be some feedback mechanism between the FPGA and the ATTiny to coordinate the change. Luckily I had foreseen such a need and connected six of the debug pins to both the FPGA and the ATTiny so that they could communicate if necessary.
I wanted the simplest possible way to tell whether the error was occurring, and it turned out that if the image was smeared then the video signal would extend past the supposed end of the line. Therefore, at each line ending, the FPGA simply checked whether any of the top four bits of the video signal were high and, if so, set one of the feedback pins high. This flag was then cleared at the start of each new frame.
The ATTiny constantly watched this feedback pin and, if it went high, would alternate between two phase settings 180 degrees apart.
This was surprisingly successful to the extent that the video corrects itself before it is noticeable to the eye that anything is wrong.
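The two halves of that feedback loop can be sketched like this (hypothetical Python of my own – the real logic lives in the FPGA and the ATTiny firmware, and the width of the Phase Select field is my assumption about the TVP7002’s 0x04 register layout):

```python
def line_end_smeared(top_four_bits):
    """FPGA side: at each line ending, flag smearing if any of the top
    four video bits are still high where the line should be blank."""
    return any(top_four_bits)

def correct_phase(phase_bits):
    """ATTiny side: when the feedback pin goes high, jump the TVP7002
    Phase Select field by 180 degrees. Assuming a 5-bit field (32 steps
    over 360 degrees), 180 degrees is 16 steps, i.e. flipping the MSB."""
    return phase_bits ^ 0b10000

phase = 0b00000
if line_end_smeared([1, 0, 0, 0]):   # smear detected on this frame
    phase = correct_phase(phase)     # toggles to the opposite phase
```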
Upscaling the Video
There are a number of ways to upscale video, varying in complexity and smoothness. For now I am using the simplest of them, the nearest-neighbour algorithm, which basically means showing some pixels twice and others once.
Upscaling horizontally from 576 pixels to 1024 required an upscaling ratio of 16:9. This means that for every 16 pixels that are output, 9 pixels are taken from the input image: 7 of those pixels are shown twice and 2 are shown once.
Upscaling vertically from 378 lines to 600 required an upscaling ratio of 100:63. This means that for every 100 lines that are output, 63 lines are taken from the input image: 37 of those lines are shown twice and 26 are shown once.
The following image shows how alternating black and white lines would be displayed with this upscaling method.
This pattern was encoded as a string of bits which operated as a circular shift register. At the end of each line/pixel the system would check this shift register: a one would tell the system to move onto the next line/pixel while a zero instructed it to repeat the same line/pixel. For example the horizontal pattern was:
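The actual bit string isn’t reproduced here, but a pattern with the right properties can be generated by spreading the “advance” bits evenly (an illustrative sketch of my own; the real design’s bit ordering may differ):

```python
def advance_pattern(src, dst):
    """One bit per output pixel/line: 1 = advance to the next input
    pixel/line, 0 = repeat the current one. Advances are spread as
    evenly as possible (a Bresenham-style distribution)."""
    return [(k + 1) * src // dst - k * src // dst for k in range(dst)]

horizontal = advance_pattern(9, 16)    # 9 ones and 7 zeros per 16 outputs
vertical = advance_pattern(63, 100)    # 63 ones and 37 zeros per 100 outputs

assert sum(horizontal) == 9 and len(horizontal) == 16
assert sum(vertical) == 63 and len(vertical) == 100
```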
TVP Sync Pulse Glitch
The VGA video controller module was one of the first modules I wrote for the prototype because, while debugging other modules, it was very helpful to have a working display. It worked pretty much first time, probably because it wasn’t the first VGA video controller I had written. When it came to porting it to the iCE40 FPGA I expected it to work straight away too, which it did… but a little while later, it stopped working.
When I looked on the ‘scope, all of the video signals looked fine, but the LCD display was not locking onto the video signal.
On closer inspection there was a tiny glitch during the HSYNC. At first I thought this was a sharp edge coupling onto the HSYNC line from the video signal, but I couldn’t find another signal making a transition at that time.
It turned out that the HSYNC signal was generated by combinatorial logic, which is not guaranteed to be glitch free; at some point a race condition resulted in a tiny pulse which confused the display. The solution was to register the video outputs so that they only change on the clock edge, preventing any glitch from escaping the silicon. Registering outputs like this is considered good practice in general.
Satisfying Hold Times
For the FPGA to receive the data from the TVP7002, the data needs to satisfy the FPGA’s hold time requirement: after the clock edge, the data must stay stable for some minimum time. Frustratingly, the TVP7002 does not guarantee any hold time, as shown by t3 in the TVP7002 timing diagram.
This meant that a lot of the time the data that was received by the FPGA was a garbled mess, and it would miss the HSYNC signal resulting in confusing results.
The answer was to configure the TVP7002 to output new data on the falling edge of the clock, using the CLK POL bit in register 0x18, and to clock data into the FPGA on the rising edge. That way there was a whole half-cycle (25ns) of hold time. Frustratingly, however, the VSYNC pulse does not seem to have any guarantee about when it changes, and therefore I do occasionally see hold time errors; these are usually resolved by re-synthesising the bitstream.
Mounting the LCD
With everything working, I needed to fit it all back into the case and restore the unit to working order. My original plan was to mount the LCD inside the HP where the CRT had been. However, this proved difficult because the CRT was curved in both directions while the LCD is flat. On top of this, putting the LCD inside cut off some of each side of the screen.
My solution then was to mount the LCD on the front of the HP, covering the hole that used to fit the CRT. In order to do this I needed to 3D print a frame to cover the LCD and to protect it from knocks.
I then mounted a piece of MDF inside the HP case to provide a surface on which to attach the frame. The MDF board was mounted on the same bolts as the original CRT so I did not have to modify the original case. Finally a block of MDF was glued to the back of the frame in order to screw everything together.
Initially everything worked, but it soon broke: the screws pulling the two pieces of MDF together caused the plastic frame to bend and the glue holding it together to fail. To mitigate this I remade the backing of the cover in 1.5mm aluminium sheet to minimise bending. I also used a glue intended to have some flex in it and mounted the metal backing to the MDF with countersunk screws instead of glue.
Mounting the PCB
Where the old display electronics used to sit there was a detachable metal plate. This conveniently already had mounting holes, so I 3D printed a cradle and mounted the electronics on it.
The power supplied to the board is 12V, but the highest voltage required on the board is 3.3V. To step down the voltage I used a linear regulator. While inefficient, I did this to minimise any noise that a switch-mode converter might inject backwards into the system, degrading the noise floor of the oscilloscope.
With a voltage drop of 8.7V and a current of about 300mA, the power dissipated in the TO-220 voltage regulator is around 2.6W. This is far too much to go without heat-sinking. In the prototypes I had used a small clip-on heat-sink and, while it got hot, it was not so hot that the regulator went into thermal shutdown.
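The dissipation figure follows directly from the numbers above:

```python
# Linear-regulator dissipation estimate, using the figures quoted above.
V_IN, V_OUT, I_LOAD = 12.0, 3.3, 0.3    # volts, volts, amps

p_dissipated = (V_IN - V_OUT) * I_LOAD  # heat in the pass element, ~2.6 W
p_delivered = V_OUT * I_LOAD            # power actually reaching the board, ~1 W
efficiency = p_delivered / (p_delivered + p_dissipated)

print(f"{p_dissipated:.2f} W dissipated, {efficiency:.0%} efficient")
```

Well under a third of the input power reaches the board, which is the price paid for the quiet supply.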
Luckily the board was mounted on a large metal plate, so I heat-sunk the regulator to this, using a mica insulator to isolate the 12V from the metal plate. I was prepared to use thermal paste, but it turned out that this heat-sinking is plenty good enough, especially with the fans, and the regulator runs almost undetectably above room temperature.
Everything seems to have worked out. After two and a half years I’m happy to put this project to rest and get on with using the HP1662AS as it was intended.