After my initial tests there are two things I now know for sure. The first is that I cannot treat the DAC like a programmable voltage source; it is simply not accurate enough. The second is that, because of the DAC’s INL, the resolution needs to be high enough to give me headroom to “trim” the DAC to the correct voltage. On that basis there really are only two practical options to achieve the goal.
The first option is to use a mapping table. Assuming I have a DAC with enough resolution, it would be feasible to build some form of calibration rig which could step through each desired voltage, trim the code written to the DAC to get as close as possible for the resolution of the DAC, and store that number in a table which can then be written to the microcontroller as a lookup table. The problem with this approach is it’s messy, and if the calibration were ever lost in use, the calibration rig would be needed to re-calibrate it. This is not an ideal solution.
The second option, and the one I will take forward, is to build a system with both an ADC and a DAC, configured so I can read the DAC output back. When a given voltage is programmed, the ideal DAC code is written to get within a few millivolts of the desired output; a software loop is then used to trim the DAC code up or down as required to get as close to the required output voltage as possible. If the PSU is to measure up (no pun intended) then it needs a metering function regardless, so all I need to do is design the system so I can switch the ADC’s inputs over to the DAC for the calibration cycle.
The ADC needs good resolution, it needs to be accurate, and it needs to be low cost. The conversion speed is also important; if it’s too slow the calibration cycle would take too long to complete. I found a good part for this in the form of the LTC2402 delta-sigma ADC from Linear Technology. I must say at this point that Linear Technology’s design support is really great; they have been kind enough to furnish me with sample devices which I have been testing in the design. I have also tested a couple of Analog Devices parts too, and they were also kind enough to furnish me with samples. These were very close in performance and a bit cheaper too (so not excluded just yet), but for performance in this application the Linear Technology parts so far have the edge – I will talk more about that later on and in future articles because I have tested a few different DACs and ADCs. The following schematic shows the digital control circuitry, extended to include the ADC chip and a 74HC4053D analogue switch which is used to switch the input of the ADC from monitoring the PSU regulator outputs to monitoring the outputs of the DAC for calibration read-back.
One of the interesting attributes of the ADC chip is that it is nominally a two-channel ADC, but in reality it is a single-channel ADC with a two-channel multiplexer on the front end, so it samples each channel alternately. This made for an interesting problem to solve, because the calibration loop needs the conversions from the ADC to be as fast as possible. The chip itself can only perform a maximum of 6.2 samples per second according to the datasheet, but because of the input multiplexer it can only actually achieve 3.1 samples per second per channel. For normal monitoring three updates per second is acceptable, but for calibration I wanted better performance. I configured the analogue switch so that in calibration mode I can use both inputs of the ADC to monitor a single channel of the DAC, which gives me a calibration loop speed of 6.2 trim-and-read-back cycles per second.
Unlike the DAC’s, the ADC’s INL is significantly better, mainly due to the nature of how it works. The architecture is a switched-capacitor system, and unlike laser-trimmed resistor strings, the charging and discharging of a capacitor is significantly more predictable and linear.
https://www.youtube.com/watch?v=f-0m5m7rUQQ
PLEASE NOTE: The LED flash is continuous, but the video frame rate makes it look like it stops and starts – it does not.
In software I have tried two different calibration schemes. The first is the simplest: it sets the ideal code for the target voltage, measures the output of the DAC, calculates the difference between the ideal code and the actual code required, then applies the correction. The second scheme is more complicated. It is the same as the first scheme, but with a second phase of micro-trimming which essentially sets a window around the centre point and then adjusts the code in steps, re-reading the output each time; you can think of this as an electronic/software version of someone turning a variable resistor while watching a meter to get a precise voltage. The second scheme also takes more time, as it needs to perform multiple read-backs in order to complete the trim cycle. I have limited the trim cycle to a maximum of eight iterations; in practice it seems to take between two and seven most of the time.
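To make the two schemes concrete, here is a minimal Python sketch of the idea. The real firmware is not shown in this article, so `write_dac()` and `read_adc()` are stand-ins for the actual hardware calls, and the DAC is simulated with a fixed error so the loop has something to trim out; the reference voltage and resolution are assumptions for the demo.

```python
# Sketch of the two calibration schemes described above. The DAC here is
# simulated with a small static error so the loop has something to trim;
# write_dac()/read_adc() stand in for the real firmware calls.

VREF = 4.096                  # assumed reference voltage
BITS = 16                     # assumed DAC/ADC resolution
LSB = VREF / (1 << BITS)      # volts per DAC code

_dac_code = 0

def write_dac(code):
    global _dac_code
    _dac_code = max(0, min(code, (1 << BITS) - 1))

def read_adc():
    # Simulated read-back: ideal DAC output plus a crude INL-style error.
    return _dac_code * LSB + 0.0007   # 0.7 mV of static error for the demo

def calibrate_simple(target_v):
    """Scheme 1: a single correction derived from one read-back."""
    ideal = round(target_v / LSB)
    write_dac(ideal)
    error_codes = round((read_adc() - target_v) / LSB)
    write_dac(ideal - error_codes)
    return ideal - error_codes

def calibrate_micro_trim(target_v, max_iters=8):
    """Scheme 2: scheme 1 plus a micro-trim phase, stepping the code one
    LSB at a time while re-reading, capped at eight iterations."""
    code = calibrate_simple(target_v)
    for _ in range(max_iters):
        err = read_adc() - target_v
        if abs(err) <= LSB / 2:       # within half an LSB: close enough
            break
        code += -1 if err > 0 else 1
        write_dac(code)
    return code
```

The iteration cap mirrors the eight-iteration limit mentioned above; think of the micro-trim loop as the software equivalent of nudging a trimpot while watching a meter.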
(P.S. The command ‘pr’ means “property read”; it reads the EEPROM configuration of the module.)
I re-tested with the spot voltages from the previous blog post and the bottom line is, 14 bits is not enough. The MCP4922 is great for the money, but it’s not good enough for this project, so I will not bother creating another spot voltage table for this variation; that would be a waste of time given we have established the DAC is just not up to the job. In the next article I will try out two 16-bit DAC chips, one from Analog Devices and the other from Linear Technology.
Thanks and keep watching….
Microcontroller Source Code

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.
Hi Gerry,
Just a quick report on the LTC2402 wired in “Internal Serial Clock, Continuous Operation” mode (pp. 19–20 of the datasheet). I was curious about the actual throughput:
– Using SPI, the LTC2402 as the master and the PIC as a slave.
– Only SCK (w/pullup) and SDO from the ADC are needed.
– CS and Fo are pulled to GND.
– The output alternates between CH0 and CH1.
– Throughput total … ~137ms (134ms for conversions + 1.7ms for data transfer). This is very close to the specs.
– Advantages: very little wiring. Can use SPI interrupts continuously. Very stable.
– Disadvantages: no fun soldering. Low throughput, although the way I read it an external clock can be used to speed it up ~2×.
In any case, I also ordered an AD7193 that seems faster and easier to solder, but the SPI code will take more time to develop.
Looking forward to your next blog.
Mike
Hi Mike,
Thanks for the report; the AD7193 looks like an interesting chip. I avoided any differential-input ADCs because of the overall design goal of a low component count. As far as I am aware you should be able to use it in single-ended mode, but you will need to disable the internal buffer because of the limited common-mode range of the internal op-amp. That means you will likely need an external buffer, which starts to get expensive when you have four input channels. You can use it in differential mode, but then you need to create differential signals, which requires either specialist differential-output op-amps or a floating supply for the control electronics. The 4.8k sample rate will give a lot lower resolution than 4.7Hz because of the sigma-delta architecture; the datasheet seems to specify 15.5 bits of resolution at 4.8k, which is not good enough. You will have to play around and see what you can get from it. I will be interested to know what you find when you have tested it.
The next blog post on the PSU will be this weekend, I hope. I have some more issues with the design – DC offset issues and overshoot problems – so I almost certainly need to get down to compensation networks around the front end, error amps and driver. Oh joy!
Gerry
Hi Gerry,
I like your idea of using the ADC to provide feedback for the DAC to make things nice and precise. It looks like what you ended up with is working well, but I just happened across this app note from Linear today and thought it might be interesting to you for this purpose: http://cds.linear.com/docs/en/application-note/an86f.pdf.
Hi Matt, thanks for the feedback. I have seen that app note before. I think using the micro is the simplest solution, and allows me to read back the output volts and current too. Gerry
Hi, I am currently working on a 1–10 amp precision dummy load with 1mA steps. I’ve also been working with a Microchip DAC, the MCP4922, and may end up changing DACs. I’m curious, you talk about micro-trimming the DAC’s output? How would you go about doing that? I can’t find any digital pots with a decent resolution. I need to be able to scale my DAC’s output via software, from 1mV down to 500nV, 10nV. This way I can adjust the voltage I am sending into my MOSFET/op-amp circuit by different factors.
The way I do it is to use a high-resolution ADC to read back and measure the output, then tune the DAC to the right voltage. The DAC you are using will not give you the range you are looking for; its integral non-linearity is way too bad to achieve 1mA steps in a 10A range with any degree of precision. Watch later parts of this project to see why. I use a 16-bit DAC, and I also expand that resolution to 19 bits using software modulation; with that setup I can get good 1mA resolution in the range I needed.
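The article doesn’t show the firmware for this resolution expansion, but the idea can be sketched as follows: three extra bits means an eight-step modulation pattern between two adjacent 16-bit codes, whose average (after a low-pass filter) sits between the DAC’s real steps. Everything here is illustrative, not the actual implementation.

```python
# Hypothetical sketch of expanding a 16-bit DAC to ~19 effective bits by
# modulating between two adjacent codes. The low-pass filter on the DAC
# output recovers the average of the pattern.

EXTRA_BITS = 3                   # 16 + 3 = 19 effective bits
STEPS = 1 << EXTRA_BITS          # 8-step modulation pattern

def modulation_pattern(code_19bit):
    """Split a 19-bit code into a base 16-bit code plus a 3-bit fraction,
    and return the 8-step sequence of 16-bit codes whose average equals
    the 19-bit value divided by 8."""
    base = code_19bit >> EXTRA_BITS      # coarse 16-bit code
    frac = code_19bit & (STEPS - 1)      # steps to spend on base + 1
    return [base + 1 if i < frac else base for i in range(STEPS)]
```

For example, a 19-bit code of `(1000 << 3) + 3` averages out to 1000⅜ of a 16-bit LSB over the eight steps.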
Gerry
Yes I have come to the conclusion I am either going to have to scale back my expectations, or start working with a new DAC and ADC.
I guess what I am really asking you is, after you read back the DAC’s value using the ADC, how do you “tune” the DAC’s output and shave off the error? I.e., the DAC is set to 1, but it’s outputting 1.0017 instead of 1.000. I’m not seeing any kind of hardware in your schematic to do this.
What you do is read back, adjust, read back, adjust; just think about what you would do with a trimpot and a multimeter, but do that in software.
Gerry
Yes, I have realized I will have to upgrade my DAC and ADC selections or I’m going to have to lower my standards. I guess what I am asking is how do you “tune” your results. For example, you’re using a 12-bit DAC with a 4.096V reference. When you set the DAC to 1 it should output 1 millivolt, but due to INL/DNL it may output 1.15mV. Sure, you can read that error with a higher-resolution ADC, but how do you actually trim the 0.15mV? I looked over your schematics and didn’t see any hardware to do this.
Besides the price, what do you think about this DAC?
http://www.analog.com/en/digital-to-analog-converters/da-converters/ad5791/products/product.html
Is there some reason you used dithering instead of a 20-bit DAC?
That’s a question of resolution; this is why I modulated the output of the DAC to expand its effective resolution and used a low-pass filter to recover the average. It works pretty well too.
Gerry
Sorry about the double post, Gerry, I got a little confused. Anyway, thank you very much; I now understand how to go about trimming the DAC output. I decided to go with the LTC2602 too, along with a 20-bit ADC.
No problem, good luck with it.
Gerry
Sorry, just one last question: do you know of any websites that may have a step-by-step tutorial on dithering a DAC? Most of what I can find is at EETimes and sites like that. Those articles are fine, but they’re not as detailed as someone totally new to analog design would need 🙂
Hi Robert,
I don’t know of any specific sites, but the principle is really simple. Suppose you have a simple 4-bit DAC giving you 1V per step, i.e. 0–15V in 1V steps. Now suppose you want to get 3.5V out; you can’t, right? Not so. What you do is drive the DAC output into a low-pass filter (a simple RC will do for most cases). Then, in software, set the code to 3, which gives you three volts, then set it to 4, which gives you 4V. Now switch between these two at an even rate, so half the time it is on 3 and the other half on 4, and you will get an *average* of 3.5V after the low-pass filter – easy. Now suppose you want 3.75V; that’s easy too: set 3V for 25% of the time and 4V for 75% of the time. That’s about it for simple modulation. Things get a lot more complicated if you want to expand beyond three or four bits….
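That 4-bit example can be worked in a few lines of code. This is just an illustration of the averaging, with made-up names; in real firmware you would write the codes out to the DAC at a fixed rate.

```python
# The 4-bit dithering example above, worked in code: toggling between two
# adjacent DAC codes so that the time-average (what the RC low-pass filter
# recovers) lands between the DAC's real steps.

V_PER_STEP = 1.0                 # 4-bit DAC, 1V per step (0-15V)

def dither_sequence(target_v, length=100):
    """Return a sequence of DAC codes whose time-average is target_v."""
    low = int(target_v // V_PER_STEP)      # e.g. code 3 for 3.5V
    frac = target_v / V_PER_STEP - low     # fraction of time on low + 1
    high_slots = round(frac * length)
    return [low + 1] * high_slots + [low] * (length - high_slots)

# 3.5V -> half the time at code 3, half at code 4:
seq = dither_sequence(3.5)
average = sum(seq) * V_PER_STEP / len(seq)   # averages to 3.5
```

In practice you would interleave the high and low codes evenly rather than running them back to back, so the ripple stays at a high frequency the RC filter can easily remove.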
Gerry
Hi, I’m using an AD5791 (DAC) and LTC2440 (ADC) to make a DC voltage source. I don’t understand how to make a control loop to get a precise DAC output. Please help me.
Basically, you need to set your DAC voltage and then read it back with your ADC, and based on what you read, trim the DAC output accordingly. This technique requires that your ADC can give you precise readings; your ADC needs to be at least as precise and accurate as your desired output accuracy.
Gerry
Thank you… can you give some suggestions about the algorithm? I have now written code for the DAC and ADC.
Rashu,
Simply calculate the code for the desired voltage, set the code on the DAC, read back the output voltage via the ADC, and trim up/down as required. You may need to do a number of read-backs until you reach your goal. Think about what you would do with a meter and a trimpot, and simply implement that in software.
Hope that helps
Gerry