Running Zero Suppression
There are two ways to run the zero suppression: either one file at a time, or over all files (this works, but it also runs over the different setting tests, which should be ignored):
Running over one file:
N.B. this will overwrite the ROOT file output.root but append to the data file pix_cern_r<runNumber>_all_zs.dat.
Alternatively, run over all files in the data directory using a Python script as below:
Making Seagull plots
To make the seagull plots, the ZeroSuppression code needs to be run with the SeaGull function called as follows:
SeaGull(int seedPixRow, int seedPixCol, double CMSSubVal, int stackPosition)
This will fill the SeaGull histograms and plots can be made as below:
python -i SeaGullPlotMaker.py
Zero Suppression Code
Here is an explanation of the code used to perform the zero suppression of the data from the test beam. This is performed using the file ZeroSuppression.cc (and .h in the include dir) in decal_sw/Analysis/ZeroSuppression/src/.
The first job of this code is to analyse 200 events and use them to calculate the pedestals for each sensor. This is performed in the member function CalculatePedestals(ifstream&, int, int, int), which takes up to 4 input parameters in the following order:
input file
event type (refers to the internal/external ADC and the trigger, taken from the file header)
stack position (the position of the sensor in the stack, 0-6)
max ped value (the maximum number of events used to calculate the pedestals)
The first 400 events are skipped to allow the pedestals to stabilise. Noise values are calculated as the RMS of the pedestal distribution over a reduced range, intended to remove the effect of hits. Each pedestal distribution is written to a ROOT file.
The pedestals and noise values are then written to the header of the output file.
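The pedestal and reduced-range noise calculation described above can be sketched as follows. This is an illustrative stand-alone version, not the actual CalculatePedestals member function; the names and the size of the reduced range are assumptions.

```cpp
#include <cmath>
#include <vector>

struct PedNoise { double pedestal; double noise; };

// Sketch of the per-pixel pedestal/noise calculation: the pedestal is the
// mean ADC value over the calibration events, and the noise is the RMS of
// the distribution restricted to a reduced range around the pedestal so
// that genuine hits do not inflate it. The +/-10 ADC window is an assumed
// value for illustration only.
PedNoise calcPedNoise(const std::vector<double>& adc, double window = 10.0) {
    double sum = 0.0;
    for (double v : adc) sum += v;
    double ped = sum / adc.size();  // pedestal = mean ADC value

    double sq = 0.0;
    int n = 0;
    for (double v : adc) {
        if (std::fabs(v - ped) < window) {  // reduced range: drop outliers/hits
            sq += (v - ped) * (v - ped);
            ++n;
        }
    }
    double noise = (n > 0) ? std::sqrt(sq / n) : 0.0;
    return {ped, noise};
}
```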
The file can be reset using SuppressData.ResetFile(....); this will then run over all of the events, including the first 200.
Before the data can be zero suppressed, a mask is applied. This is performed as follows:
First, SuppressData.ZeroSuppress(fin, fout, 1, 1000) is called.
The options selected here mean that the next 1000 events are examined; for these events a hit threshold is set at 10 sigma of the pedestal noise. Pedestal correction and common mode correction have been applied.
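The suppression decision itself amounts to a simple threshold test. The helper below is a hypothetical sketch of that test, not code from ZeroSuppression.cc; all names are illustrative.

```cpp
// A pixel survives zero suppression (i.e. counts as a hit) only if its
// pedestal- and common-mode-corrected ADC value exceeds nSigma times the
// pixel's noise. nSigma = 10 matches the threshold described in the text.
bool passesZeroSuppression(double rawADC, double pedestal,
                           double commonMode, double noise,
                           double nSigma = 10.0) {
    double corrected = rawADC - pedestal - commonMode;  // both corrections applied
    return corrected > nSigma * noise;
}
```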
Common Mode Correction
Here I will describe the process of common mode correction which is applied to the events from the test beam.
The common mode correction is applied in a function called CMS(double PedSubADC, double CMSSubADC, int stackPosition, double threshold=1.).
The first parameter is the pedestal-subtracted value from the pixels, the second is the CMS-subtracted value, the third is the position of the sensor in the stack, and the final one is the threshold level in units of sigma.
This function calculates the mean of the 48 pixels in a row, ignoring pixels greater than the threshold level (to exclude hits).
It is executed twice: first removing hits at a high level (30 sigma), then at a lower level (7 sigma).
Then, using SuppressData.HitMap1D(), a 1D plot of the hits is made, and any pixel which fires at a level above 4.5 sigma of the mean for its sensor is added to the masked pixels.
The correlation plots are made by including FillHitCorrelation(hits, pixel, stackPosition) (commented out in the code) and running as before.
An alignment can be made using the script zalign.py
python -i scripts/zalign.py
The output from the script gives the alignment of each sensor relative to the first sensor, in row and column.
To use a different run, change the name of the ROOT file in the script.
Read ZS Code
Here is a description of what the ReadZSData code does:
The code begins by reading the data from file. There is a run header for each sensor with the following parameters:
number of headers
position in stack
There is also an event header for each sensor which contains:
evt header size
number of hits all sensors
number of hits this sensor
The data for each event is then read in. The alignment is applied by setting rowAlignment and colAlignment; as it is applied, the units are converted from pixels to microns.
The hits are converted to clusters using MakeCluster, which searches the 8 neighbouring pixels for values greater than an inclusion threshold (2 sigma by default).
These clusters are then sorted by row and checked for uniqueness; if any clusters share the same pixels, the first is kept and the rest are ignored.
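The neighbour search can be sketched as follows. This mimics the behaviour described for MakeCluster but is not its actual implementation; the container choices and names are assumptions.

```cpp
#include <map>
#include <set>
#include <utility>

using Pixel = std::pair<int, int>;  // (row, col)

// Seed-based clustering sketch: starting from a seed pixel, each of the 8
// neighbours is added to the cluster if its significance (in units of
// sigma) exceeds the inclusion threshold, 2 sigma by default.
std::set<Pixel> makeCluster(const std::map<Pixel, double>& significance,
                            Pixel seed, double inclusion = 2.0) {
    std::set<Pixel> cluster = {seed};
    for (int dr = -1; dr <= 1; ++dr) {
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0) continue;  // skip the seed itself
            Pixel p{seed.first + dr, seed.second + dc};
            auto it = significance.find(p);
            if (it != significance.end() && it->second > inclusion)
                cluster.insert(p);
        }
    }
    return cluster;
}
```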
These lists of clusters are then combined if they fall within a range of 100 microns. If there are 6 sensors with clusters in this range then they are fitted.
A chi2 is then calculated for each point on the sensor relative to the track, and these values are stored in an ntuple.