Author Topic: Smudge proposed NMR experiment replication.  (Read 20662 times)

Group: Moderator
Hero Member
*****

Posts: 2490

There's no such thing as an "RMS mode" in my scope, as far as I can find.
I can, however, increase the "record length" from the present 10K up to 10M, but doing so creates a large file, e.g. 166M compared to the earlier 195K.

I guess I have to find a compromise between record length and aliasing.

I will start with a 1M record length etc., but this file is already 15.8M, so I guess it is only accessible from my FTP (the 10M record length file (166M) is there too).

I uploaded the 100K record length file (1.5M) below again as .pdf; perhaps that's already usable.


Itsu
   

Group: Professor
Hero Member
*****

Posts: 1913
There's no such thing as an "RMS mode" in my scope, as far as I can find.
What about some maximum or peak hold mode that takes the max of the last e.g. 1M samples and writes only that to the memory?
Also regular sweep averaging might work if you trigger each sweep on a noisy trigger signal ...it will take a long time but the file size will be small.

I can, however, increase the "record length" from the present 10K up to 10M, but doing so creates a large file, e.g. 166M compared to the earlier 195K.
I could write you a small command line utility that averages the absolute values of N samples and writes the result to a small output file.
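A utility like that could be sketched as follows (a minimal sketch; the two-column (time, voltage) CSV layout and the header handling are assumptions, since Tek exports vary):

```python
import csv

def block_average_abs(in_path, out_path, n):
    """Average the absolute values of every n consecutive samples.

    Assumes a (time, voltage) CSV; non-numeric header lines are skipped.
    """
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        writer = csv.writer(fout)
        block, t0 = [], None
        for row in csv.reader(fin):
            try:
                t, v = float(row[0]), float(row[1])
            except (ValueError, IndexError):
                continue  # skip header or malformed lines
            if t0 is None:
                t0 = t          # timestamp the block by its first sample
            block.append(abs(v))
            if len(block) == n:
                writer.writerow([t0, sum(block) / n])
                block, t0 = [], None
        if block:               # flush a leftover partial block
            writer.writerow([t0, sum(block) / len(block)])

# Usage (hypothetical file names):
# block_average_abs("scope_dump.csv", "averaged.csv", 1000)
```

With N = 1000 this would shrink a 10M-sample record to 10K output rows.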

I guess I have to find a compromise between record length and aliasing.
I don't think you can avoid aliasing this way because if you want to do a 10s 0-50MHz sweep just like before, then according to Nyquist you need a minimum of 100MSps sampling rate to avoid aliasing.
100MSps * 10s = 1G samples ...and I doubt your memory is that deep.
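That estimate is simple to check (numbers taken from the sweep discussed above):

```python
# Minimum alias-free capture of a 0-50MHz sweep lasting 10 s.
f_max_hz = 50e6                  # highest frequency in the sweep
sweep_s = 10.0                   # sweep duration
rate_sps = 2 * f_max_hz          # Nyquist: sample at least 2x the top frequency
samples = rate_sps * sweep_s     # samples the record must hold

print(f"{rate_sps / 1e6:.0f} MSps for {sweep_s:.0f} s -> {samples / 1e9:.0f} G samples")
# -> 100 MSps for 10 s -> 1 G samples
```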

I uploaded the 100K record length file (1.5M) below again as .pdf; perhaps that's already usable.
I'll try to process it.
   

Group: Professor
Hero Member
*****

Posts: 1913
An HP i-Probe amplifier, which I once repaired, used the 6-pin AD8361 chip to generate its RMS output the analog way.
   

Group: Moderator
Hero Member
*****

Posts: 2490
What about some maximum or peak hold mode that takes the max of the last e.g. 1M samples and writes only that to the memory?
Also regular sweep averaging might work if you trigger each sweep on a noisy trigger signal ...it will take a long time but the file size will be small.
I could write you a small command line utility that averages the absolute values of N samples and writes the result to a small output file.
I don't think you can avoid aliasing this way because if you want to do a 10s 0-50MHz sweep just like before, then according to Nyquist you need a minimum of 100MSps sampling rate to avoid aliasing.
100MSps * 10s = 1G samples ...and I doubt your memory is that deep.
I'll try to process it.

I used "peak detect" with the 10K samples, which seems much smoother.

File sizes are the same (166K for this one), so it is attached below.

   

Group: Professor
Hero Member
*****

Posts: 1913
I used "peak detect" with the 10K samples, which seems much smoother.
File sizes are the same (166K for this one), so it is attached below.
Wow, that is a weird one!
What is that frequency jump around 1s?

Did the FG output that waveform or is it an aliasing artifact ?

   

Group: Moderator
Hero Member
*****

Posts: 2490

I guess I did not wait long enough for triggering to complete.

Here is a 100K record length capture with peak detect, waiting long enough:

Itsu
   

Group: Professor
Hero Member
*****

Posts: 1913
:( See attached.
   

Group: Moderator
Hero Member
*****

Posts: 2490

Hmmm,    I used a shorter FG sweep time (200ms, scope 20ms) now and this looks better to me; see the graph.

Not sure what is causing this truncation after 1s, the FG or the scope.
Changing it from .csv to .pdf and back does not cause it, as the original .csv also has it.

Below is attached a 10kHz to 50MHz sweep in 200ms with a 100K record length and peak detect.

Itsu
   

Group: Professor
Hero Member
*****

Posts: 1913
Triggering from a noisy signal and averaging sweeps at low kSps also averages out aliasing errors and produces small file sizes.
The FG can generate noise on a 2nd channel. It can be added resistively to the trigger signal.

Look how unfortunate triggering and low-rate sampling in the upper case result in sampling all zeros (blue arrows).
In the lower case, the sampling starts at a different phase and results in different sampled points.
When the maximums of these samples are averaged over multiple sweeps, the aliasing errors can be averaged out.
All that is needed is for the trigger signal's phase to be varied (by sawtooth, triangle, sine, or by noise).
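The effect of that phase dithering can be demonstrated numerically (a toy simulation, not a model of the actual instruments; the frequencies are arbitrary assumptions):

```python
import math
import random

f_sig = 1.0e6      # tone far above the sample rate (assumed)
f_samp = 1.0e4     # deliberately too-low sample rate (assumed)
n_points = 50      # samples per sweep
n_sweeps = 200
random.seed(1)     # deterministic demo

# Without dither the sample instants land on the same phase every sweep;
# here they hit the zero crossings exactly, like the blue arrows above.
undithered = max(abs(math.sin(2 * math.pi * f_sig * i / f_samp))
                 for i in range(n_points))

# With the trigger instant dithered by noise, holding the per-point peak
# across many sweeps recovers the true envelope (amplitude 1.0).
peaks = [0.0] * n_points
for _ in range(n_sweeps):
    jitter = random.uniform(0.0, 1.0 / f_sig)  # noise shifts the trigger
    for i in range(n_points):
        t = i / f_samp + jitter
        peaks[i] = max(peaks[i], abs(math.sin(2 * math.pi * f_sig * t)))

print(f"undithered max: {undithered:.3f}, dithered peak-hold min: {min(peaks):.3f}")
```

The undithered pass reads essentially zero everywhere, while the dithered peak-hold converges to the full amplitude at every point.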
   

Group: Moderator
Hero Member
*****

Posts: 2490

Ok,  but I trigger from the FG's external trigger output, which is fixed (a square wave), as I understand it.

Anyway, here is a 2s sweep (scope 200ms) which shows something similar at the 1s point:
   

Group: Professor
Hero Member
*****

Posts: 1913
Below is attached a 10kHz to 50MHz sweep in 200ms with a 100K record length and peak detect.
And below are the first 32k samples of that sweep with their absolute values averaged out.

It looks better except for some weirdness at the beginning.

However, you shortened the sweep length to 200ms to fit into memory, and that is too much of a sacrifice.
   

Group: Professor
Hero Member
*****

Posts: 1913
Ok,  but I trigger from the FG's external trigger output, which is fixed (a square wave), as I understand it.
Yes, but you can resistively add noise (or a slow ramp) to that square wave so that the sampling starts several nanoseconds later (or earlier) for every sweep.
This way, when you just keep the peaks (maximums) from the multiple sweeps, the aliasing errors will disappear ...even with low sample rates and long sweep durations (still small file sizes).
   

Group: Moderator
Hero Member
*****

Posts: 2490

I added noise from my 2nd FG channel via a tee to the square trigger signal at various strengths.
But the artifact after 1s still remains.


Looking at the data from the csv file, I see the 4th digit behind the decimal point is being ignored after the 1s mark; see the red digits in the picture (data format changed into "numbers").

After 0.0099 and 0.0100 (1s) it should continue with 0.0101, then 0.0102 etc., not repeat 0.0100 and then skip to 0.0110 etc., I think.
   

Group: Professor
Hero Member
*****

Posts: 1913
I added noise from my 2nd FG channel via a tee to the square trigger signal at various strengths.
That should have varied the starting phase of the sampling sequence.
Maybe the square wave is too steep. Try adding a small RC integrator to slow down the edges of that square trigger signal.
The only way this cannot work is if your trigger subsystem is a slave to some coarse time quanta at low sampling rates.
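For sizing that RC integrator, the 10-90% rise time of a single-pole RC is tau·ln(9), about 2.2·R·C. The component values below are only illustrative assumptions, not a recommendation:

```python
import math

r_ohm = 1_000.0            # series resistor (assumed)
c_farad = 10e-9            # shunt capacitor (assumed)
tau = r_ohm * c_farad      # time constant: 10 us
t_rise = tau * math.log(9) # 10%-90% rise time = tau * ln(9) ~= 2.2 * tau

print(f"tau = {tau * 1e6:.0f} us, 10-90% rise time = {t_rise * 1e6:.1f} us")
# -> tau = 10 us, 10-90% rise time = 22.0 us
```

An edge tens of microseconds long gives the added noise real leverage over the trigger instant, which is exactly the jitter wanted here.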

After 0.0099 and 0.0100 (1s) it should continue with 0.0101, then 0.0102 etc., not repeat 0.0100 and then skip to 0.0110 etc., I think.
When I look at the raw .csv file, I see all timestamps expressed in scientific notation, e.g.: "1.99998e-01".
How did you get to the purely decimal number format?
   

Group: Moderator
Hero Member
*****

Posts: 2490

Quote
That should have varied the starting phase of the sampling sequence.
Maybe the square wave is too steep. Try adding a small RC integrator to slow down the edges of that square trigger signal.
The only way this cannot work is if your trigger subsystem is a slave to some coarse time quanta at low sampling rates.

Ok,  I can try that.

Quote
When I look at the raw .csv file, I see all timestamps expressed in scientific notation, e.g.: "1.99998e-01".
How did you get to the purely decimal number format?

I tried to explain that with "(data changed into "numbers")" above, but as I have a Dutch Excel version, I am not sure what it is called in English:

Go to cell properties (right-click on column A), then the 2nd entry from the top (getal = number), then change to 4 decimals; see the picture.

Itsu
   

Group: Professor
Hero Member
*****

Posts: 1913
I tried to explain that with "(data changed into "numbers")" above, but as I have a Dutch Excel version, I am not sure what it is called in English:
Go to cell properties (right-click on column A), then the 2nd entry from the top (getal = number), then change to 4 decimals; see the picture.
In my Excel it looks like this:

...but it only affects how the number is displayed.  It does not affect the underlying value.
   

Group: Moderator
Hero Member
*****

Posts: 2490

So the whole A column with scientific notation does not change to decimal when doing so?

I made a short video on what i did: https://www.youtube.com/watch?v=K6ZCBf43LDw

Itsu
   

Group: Professor
Hero Member
*****

Posts: 1913
So the whole A column with scientific notation does not change to decimal when doing so?
No, after importing external data, Excel stores the values in binary form internally.
Your video shows only the change of the display format - this change does not alter the binary values stored internally, it only changes their presentation on the display. See the attachment.

The significant digit truncation happens in the procedure which imports the data from the external csv file into Excel.
Just compare the number of significant digits in the CSV file displayed by Notepad++ with the number of significant digits imported by Excel (you can keep the scientific notation display format, to see the difference directly).

After that import is completed, all subsequent operations do not lose any information and the display format does not affect the values which Excel has stored internally.

« Last Edit: 2020-09-26, 01:12:29 by verpies »
   

Group: Moderator
Hero Member
*****

Posts: 2490

Quote
No, after importing external data, Excel stores the values in binary form internally.
Your video shows only the change of the display format - this change does not alter the binary values stored internally, it only changes their presentation on the display. See the attachment.
OK,  yes, the data stays the same; only the presentation changes from scientific to, in this case, decimal.
So the original data from the Tek scope already has the problem after 1s.

Quote
The significant digit truncation happens in the procedure which imports the data from the external csv file into Excel.
Just compare the number of significant digits in the CSV file displayed by Notepad++ with the number of significant digits imported by Excel (you can keep the scientific notation display format, to see the difference directly).


The screenshot below is from the original file from the Tek scope, opened with Notepad++.
Around the 1s mark (red horizontal line) it goes from 10ms steps to 100ms steps (and repeats that step 4 times).

Excel has not touched this file yet.
   

Group: Professor
Hero Member
*****

Posts: 1913
The screenshot below is from the original file from the Tek scope, opened with Notepad++.
Around the 1s mark (red horizontal line) it goes from 10ms steps to 100ms steps (and repeats that step 4 times).
Excel has not touched this file yet.
You are right; the timestamp resolution is too low for this sample rate, so the timestamps repeat in the csv file.
I think a bug report is in order.
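The quantization is easy to confirm without Excel: a short script can scan the raw csv and report every place where the timestamp step changes (the time-in-first-column layout is an assumption about the export format):

```python
def timestamp_step_changes(lines):
    """Yield (line_no, step) at every line where the time step changes."""
    prev_t, prev_step = None, None
    for n, line in enumerate(lines, 1):
        try:
            t = float(line.split(",")[0])
        except ValueError:
            continue  # header or non-numeric line
        if prev_t is not None:
            step = round(t - prev_t, 12)
            if step != prev_step:
                yield n, step
            prev_step = step
        prev_t = t

# Toy data reproducing the reported anomaly around the 1s mark:
demo = ["0.0098,1", "0.0099,2", "0.0100,3", "0.0100,4", "0.0110,5"]
print(list(timestamp_step_changes(demo)))
```

On a healthy capture this prints a single entry; repeated timestamps show up as a step of 0.0 followed by an oversized step, exactly the pattern in the screenshot.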
   

Group: Moderator
Hero Member
*****

Posts: 2490

OK,  but this won't be a showstopper, right?


Itsu
   

Group: Professor
Hero Member
*****

Posts: 1913
No, it is only an annoyance.

As far as showstoppers go, I nominate the pancake/toroid crosstalk.
   

Group: Moderator
Hero Member
*****

Posts: 2490

OK,  so what do you suggest: throw in the towel, or go for a second pancake coil build to do some measurements on?
   

Group: Professor
Hero Member
*****

Posts: 1913
OK,  so what do you suggest: throw in the towel, or go for a second pancake coil build to do some measurements on?
I think this pancake coil will not get any better.  Its SRF is high enough that it can be lowered with external capacitances.

So the next steps are:
1) build another pancake coil to create a bucking H-field when connected in parallel and to minimize the E-field occurring between the two pancake coils.
2) wind the toroidal coil with the same winding technique as the pancake coils in order to avoid circumferential induction in it (this is the major contributor to the H-field crosstalk).
3) surround the toroidal coil with a grounded E-field (only!) shield as described here in order to minimize the E-field crosstalk.

I will do the same.
« Last Edit: 2020-10-17, 01:30:46 by verpies »
   

Group: Moderator
Hero Member
*****

Posts: 2490

Great, that will keep me occupied for the rest of the winter.

Quote
1) build a mirror image pancake

So physically different (mirrored) from the first one?   
   