area detector data?

Ray Osborn ROsborn at anl.gov
Thu Dec 2 19:58:18 GMT 1999


on 99/12/2 2:24 PM, Brian Tieman at tieman at aps.anl.gov wrote:

> 
> While reading this, please keep in mind that my viewpoint was always to fit
> what we could into the NeXus spec as it was already defined and then to lobby
> for the rest of the needed info as needed.  This has, however, become such a
> long drawn out battle in this group that I have thrown up my hands in the
> hopes of getting some actual work done.  The library I have developed in no
> way prevents users from writing NeXus standard files--but it also allows them
> to do what they want without constantly pestering me for changes.


Brian,
Thanks for the reply.  I certainly appreciate the difficulties of adhering
to a standard when there are many competing viewpoints (cf HTML).  How to
maintain a standard's integrity while allowing it to develop is an
organisational problem that we haven't yet solved.  We will be making
proposals on that front, I hope shortly.  In the meantime, please tell your
colleagues that the best way of getting some attention for what they want is
to write to this list as you have.

> 
> Here is a sample of a couple of the groups the CMT guys have defined.  At one
> point I asked them specifically to try and get the NeXus group to take a look
> at what they wanted to do and see if things could reasonably be incorporated.
> My understanding is that someone from the NeXus group was contacted (I thought
> it was you, even, although I'm really not sure.) about these additions, and
> that they were turned down.

I'm not the most reliable responder to e-mails.  However, I'm sure I never
turned anyone down - certainly not intentionally.

> Anyway, here's a couple of sample groups:
> 
> data_attribute--group
>    black_field--the name of this group and what is in it differ based on
>                 the type of data stored in this file
>       name--field containing file name
>       description--field containing type of data--matches group name in
>                    all cases I know about
>       data_file_index--index number for this file
>       data_type--integer expressing data type--interestingly enough this
>                  matched NX_INT16, etc...
>       data_dimensions--number of dimensions
>       n_i_pixels--x axis size
>       n_j_pixels--y_axis size
>       integration time--length of integration time
>    NeXus_API_version--version of napi used
>    experiment_file_name--name of HDF file containing groups common to
>                          all images
> 
> data_array--group
>    image_data--the actual 2D data
> 
> I don't have ready access to the definition of some of the other groups, but
> suffice it to say that the document describing them is 100 pages long.  This
> looks so much unlike what I think NeXus is about that I refuse to call
> the files NeXus files.  They're really HDF files with a specific format
> for Computed Micro-Tomography.
> 

Actually, this doesn't look that difficult to fit into the NeXus scheme.
The following is a NeXus-compliant file (group classes in parentheses):

black_field (NXentry)
   name
   description
   data_file_index
   integration_time
   experiment_file_name
   data (NXdata)
      image_data

Note that the actual file name and NeXus version number are automatically
stored as global attributes by the NeXus API.  The data type and pixel
numbers are redundant because you can get that information by calling
NXgetinfo after opening image_data.  There is really no need to store
such information twice, although you can add it to the black_field group
if you want to.
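
To make this concrete, here is a rough sketch of how such a file could be
written and read back with the C version of the API.  The file name, field
values, and 1024x1024 image size are only placeholders, and I have left out
all error checking:

#include <string.h>
#include "napi.h"

int main()
{
    NXhandle fid;
    static short image[1024][1024];   /* placeholder detector image */
    int dims[2] = {1024, 1024};
    char name[] = "run0001";          /* placeholder value for the name field */
    int slen = strlen(name);
    int rank, dim[2], type;

    /* NXopen writes the file name and NeXus version as global
       attributes, so they need not be stored as separate fields */
    NXopen("cmt_scan.hdf", NXACC_CREATE, &fid);

    NXmakegroup(fid, "black_field", "NXentry");
    NXopengroup(fid, "black_field", "NXentry");

    /* one character field; description, data_file_index, integration_time
       and experiment_file_name would be written in the same way */
    NXmakedata(fid, "name", NX_CHAR, 1, &slen);
    NXopendata(fid, "name");
    NXputdata(fid, name);
    NXclosedata(fid);

    NXmakegroup(fid, "data", "NXdata");
    NXopengroup(fid, "data", "NXdata");
    NXmakedata(fid, "image_data", NX_INT16, 2, dims);
    NXopendata(fid, "image_data");
    NXputdata(fid, image);

    /* the data type and pixel counts come back from NXgetinfo,
       so there is no need to store them twice */
    NXgetinfo(fid, &rank, dim, &type);

    NXclosedata(fid);
    NXclosegroup(fid);                /* close NXdata  */
    NXclosegroup(fid);                /* close NXentry */
    NXclose(&fid);
    return 0;
}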

> 
> I have had more success in making other types of data more NeXus
> like--however, most people I work for/with don't have a good grasp of what
> NeXus is/does (I'm not even sure my grasp is as good as it should be).  So,
> I'm content to write HDF files--for the moment at least...
> 
> 

This is a serious problem for me.  I have tried to make the NeXus web pages
as comprehensible as possible, but I am too close to them to appreciate the
difficulties outsiders have.  I would really appreciate feedback on what
parts need improving.  I know that one suggestion is to make more NeXus
files available as examples.

>> 
>> HDF 4.1r3 allows for internal data compression of datasets using a variety
>> of algorithms.  We have not yet implemented any of them because it had not
>> yet got to the top of the priority list.  If this is critical for any
>> particular user, we can see if it can be moved up.  I don't think it's that
>> difficult, unless the performance penalty is significant.
>> 
> 
> I'd really like to see this done.  Most of our images are 1024x1024x2 bytes,
> or ~2 MB.  A complete data set may contain close to 1000 of these images.
> Plus, the hope is to go to 2048x2048x2-byte cameras within the next year or
> so.  That's an awful lot of data.  As long as the file compression doesn't
> bottleneck the file saving too much (we can currently save an image at ~1 Hz)
> it's an option I'd like to make use of.
> 

Never say we're not responsive.  Following that post, Mark Koennecke has
already produced a version of the API which includes data compression.  We
are testing it now and it looks as if it works very well.  I hope we can
officially release it soon.
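
For anyone who wants to try it when it does come out, the calling sequence
should look something like the sketch below.  Since the compressed version
has not been released yet, take the NXcompress call and the NX_COMP_LZW flag
as provisional names that could still change:

#include "napi.h"

/* Sketch only: NXcompress and NX_COMP_LZW are provisional names for the
   compression interface under test and may change before release. */
int write_compressed_image(const char *path, short *image, int nx, int ny)
{
    NXhandle fid;
    int dims[2];

    dims[0] = nx;
    dims[1] = ny;

    if (NXopen(path, NXACC_CREATE, &fid) != NX_OK) return -1;

    NXmakegroup(fid, "entry", "NXentry");
    NXopengroup(fid, "entry", "NXentry");
    NXmakegroup(fid, "data", "NXdata");
    NXopengroup(fid, "data", "NXdata");

    NXmakedata(fid, "image_data", NX_INT16, 2, dims);
    NXopendata(fid, "image_data");
    NXcompress(fid, NX_COMP_LZW);   /* request compression before the data is written */
    NXputdata(fid, image);
    NXclosedata(fid);

    NXclosegroup(fid);              /* close NXdata  */
    NXclosegroup(fid);              /* close NXentry */
    NXclose(&fid);
    return 0;
}

The open question for your ~1 Hz save rate is how much overhead the
compression adds per image, which is part of what we are looking at in
the tests.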

Regards,
Ray
-- 
Dr Ray Osborn                Tel: +1 (630) 252-9011
Materials Science Division   Fax: +1 (630) 252-7777
Argonne National Laboratory  E-mail: ROsborn at anl.gov
Argonne, IL 60439-4845




