java.lang.Object
  ncsa.hdf.object.HObject
    ncsa.hdf.object.Dataset
      ncsa.hdf.object.CompoundDS
        ncsa.hdf.object.h5.H5CompoundDS
public class H5CompoundDS
The H5CompoundDS class defines an HDF5 dataset of compound datatypes.
An HDF5 dataset is an object composed of a collection of data elements, or raw data, and metadata that stores a description of the data elements, data layout, and all other information necessary to write, read, and interpret the stored data.
An HDF5 compound datatype is similar to a struct in C or a common block in Fortran: it is a collection of one or more atomic types or small arrays of such types. Each member of a compound type has a name which is unique within that type, and a byte offset that determines the first byte (smallest byte address) of that member in a compound datum.
For more information on HDF5 datasets and datatypes, read the HDF5 User's Guide.
There are two basic types of compound datasets: simple compound data and nested compound data. Members of a simple compound dataset have atomic datatypes. Members of a nested compound dataset are themselves compound or arrays of compound data.
Since Java does not understand C structures, we cannot directly read/write compound data values as in the following C example:

typedef struct s1_t {
    int    a;
    float  b;
    double c;
} s1_t;
s1_t s1[LENGTH];
...
H5Dwrite(..., s1);
H5Dread(..., s1);

Values of compound data fields are instead stored in a java.util.Vector object, and we read and write compound data by field rather than by compound structure. For the example above, the java.util.Vector object has three elements: int[LENGTH], float[LENGTH] and double[LENGTH]. Since Java understands the primitive datatypes int, float and double, we are able to read/write the compound data by field.
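As a minimal sketch of this field-by-field access (the file name "test.h5" and dataset path "/s1" are hypothetical, the H5File(String, int) constructor with the READ flag is assumed, and error handling is omitted; getData() returns the List of per-field arrays described under read() below):

H5File file = new H5File("test.h5", H5File.READ);
H5CompoundDS dset = (H5CompoundDS) file.get("/s1");
dset.init();                                          // load datatype and dataspace information
java.util.List data = (java.util.List) dset.getData();
int[]    a = (int[])    data.get(0);                  // all values of field "a"
float[]  b = (float[])  data.get(1);                  // all values of field "b"
double[] c = (double[]) data.get(2);                  // all values of field "c"
file.close();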
Field Summary

| Modifier and Type | Field |
|---|---|
| static long | serialVersionUID |
Fields inherited from class ncsa.hdf.object.CompoundDS

separator
Constructor Summary

| Constructor | Description |
|---|---|
| H5CompoundDS(FileFormat theFile, java.lang.String name, java.lang.String path) | Constructs an HDF5 compound dataset with the given file, dataset name and path. |
| H5CompoundDS(FileFormat theFile, java.lang.String name, java.lang.String path, long[] oid) | Deprecated. Not for public use in the future. Use H5CompoundDS(FileFormat, String, String) instead. |
Method Summary

| Modifier and Type | Method | Description |
|---|---|---|
| void | clear() | Clears memory held by the dataset, such as the data buffer. |
| void | close(int did) | Closes access to the object. |
| static Dataset | create(java.lang.String name, Group pgroup, long[] dims, long[] maxdims, long[] chunks, int gzip, java.lang.String[] memberNames, Datatype[] memberDatatypes, int[] memberRanks, long[][] memberDims, java.lang.Object data) | Creates a simple compound dataset in a file with or without chunking and compression. |
| static Dataset | create(java.lang.String name, Group pgroup, long[] dims, java.lang.String[] memberNames, Datatype[] memberDatatypes, int[] memberRanks, long[][] memberDims, java.lang.Object data) | Deprecated. Not for public use in the future. Use create(String, Group, long[], long[], long[], int, String[], Datatype[], int[], long[][], Object) instead. |
| static Dataset | create(java.lang.String name, Group pgroup, long[] dims, java.lang.String[] memberNames, Datatype[] memberDatatypes, int[] memberSizes, java.lang.Object data) | Deprecated. Not for public use in the future. Use create(String, Group, long[], long[], long[], int, String[], Datatype[], int[], long[][], Object) instead. |
| Datatype | getDatatype() | Returns the datatype object of the dataset. |
| java.util.List | getMetadata() | Retrieves the metadata, such as attributes, from file. |
| java.util.List | getMetadata(int... attrPropList) | |
| int | getSize(int tid) | Returns the size in bytes of a given datatype. |
| boolean | hasAttribute() | Checks if the object has any attributes attached. |
| void | init() | Retrieves datatype and dataspace information from file and sets the dataset in memory. |
| boolean | isString(int tid) | Checks if a given datatype is a string. |
| int | open() | Opens an existing object, such as a dataset or group, for access. |
| java.lang.Object | read() | Reads the data from file. |
| byte[] | readBytes() | Reads the raw data of the dataset from file into a byte array. |
| void | removeMetadata(java.lang.Object info) | Deletes an existing piece of metadata from this data object. |
| void | setName(java.lang.String newName) | Sets the name of the object. |
| void | write(java.lang.Object buf) | Writes the given data buffer into this dataset in a file. |
| void | writeMetadata(java.lang.Object info) | Writes a specific piece of metadata (such as an attribute) into file. |
Methods inherited from class ncsa.hdf.object.CompoundDS

copy, getMemberCount, getMemberNames, getMemberOrders, getMemberTypes, getMemeberDims, getSelectedMemberCount, getSelectedMemberOrders, getSelectedMemberTypes, isMemberSelected, selectMember, setMemberSelection

Methods inherited from class ncsa.hdf.object.Dataset

byteToString, clearData, convertFromUnsignedC, convertFromUnsignedC, convertToUnsignedC, convertToUnsignedC, getChunkSize, getCompression, getConvertByteToString, getData, getDimNames, getDims, getHeight, getMaxDims, getRank, getSelectedDims, getSelectedIndex, getStartDims, getStride, getWidth, isEnumConverted, setConvertByteToString, setData, setEnumConverted, stringToByte, write

Methods inherited from class ncsa.hdf.object.HObject

equalsOID, getFID, getFile, getFileFormat, getFullName, getLinkTargetObjName, getName, getOID, getPath, setLinkTargetObjName, setPath, toString

Methods inherited from class java.lang.Object

equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Field Detail

serialVersionUID

public static final long serialVersionUID

See Also: HObject.serialVersionUID, Constant Field Values

Constructor Detail
public H5CompoundDS(FileFormat theFile, java.lang.String name, java.lang.String path)
The dataset object represents an existing dataset in the file. For example, new H5CompoundDS(file, "dset1", "/g0/") constructs a dataset object that corresponds to the dataset "dset1" in group "/g0/".

This object is usually constructed at FileFormat.open(), which loads the file structure and object information into a tree structure (TreeNode). It is rarely used elsewhere.

Parameters:
    theFile - the file that contains the dataset.
    name - the name of the CompoundDS, e.g. "compDS".
    path - the path of the CompoundDS, e.g. "/g1".

@Deprecated
public H5CompoundDS(FileFormat theFile, java.lang.String name, java.lang.String path, long[] oid)

Deprecated. Not for public use in the future. Use H5CompoundDS(FileFormat, String, String) instead.
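As a minimal sketch of direct construction (assuming file is an already opened FileFormat instance and that a dataset "dset1" exists in group "/g0/"; error handling is omitted):

H5CompoundDS dset = new H5CompoundDS(file, "dset1", "/g0/");
dset.init();                   // load datatype and dataspace information from file
Object data = dset.read();     // read the dataset values into memory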
Method Detail
public boolean hasAttribute()
Checks if the object has any attributes attached.

Specified by: hasAttribute in interface DataFormat
public Datatype getDatatype()
Returns the datatype object of the dataset.

getDatatype in class Dataset
public void clear()
Clears memory held by the dataset, such as the data buffer.

clear in class Dataset
public byte[] readBytes() throws HDF5Exception
readBytes() reads raw data into an array of bytes instead of an array of its datatype. For example, for a one-dimensional 32-bit integer dataset of size 5, readBytes() returns a byte array of size 20 rather than an int array of size 5.

readBytes() can be used to copy data from one dataset to another efficiently because the raw data is not converted to its native type, which saves memory space and CPU time.

readBytes in class Dataset

Throws: HDF5Exception
public java.lang.Object read() throws HDF5Exception
read() reads the data from file into a memory buffer and returns the memory buffer. The dataset object does not hold the memory buffer. To store the memory buffer in the dataset object, one must call getData().

By default, the whole dataset is read into memory. Users can also select a subset to read. Subsetting is done in an implicit way.
How to Select a Subset
A selection is specified by three arrays: start, stride and count.
The following example shows how to make a subset. In the example, the dataset is a 4-dimensional array of [200][100][50][10], i.e. dims[0]=200; dims[1]=100; dims[2]=50; dims[3]=10. We want to select every other data point in dims[1] and dims[2].

int rank = dataset.getRank();                      // number of dimensions of the dataset
long[] dims = dataset.getDims();                   // the dimension sizes of the dataset
long[] selected = dataset.getSelectedDims();       // the selected size of the dataset
long[] start = dataset.getStartDims();             // the offset of the selection
long[] stride = dataset.getStride();               // the stride of the dataset
int[] selectedIndex = dataset.getSelectedIndex();  // the selected dimensions for display

// select dim1 and dim2 as 2D data for display, and slice through dim0
selectedIndex[0] = 1;
selectedIndex[1] = 2;
selectedIndex[2] = 0;

// reset the selection arrays
for (int i = 0; i < rank; i++) {
    start[i] = 0;
    selected[i] = 1;
    stride[i] = 1;
}

// set stride to 2 on dim1 and dim2 so that every other data point is selected
stride[1] = 2;
stride[2] = 2;

// set the selection size of dim1 and dim2
selected[1] = dims[1] / stride[1];
selected[2] = dims[2] / stride[2];

// when dataset.getData() is called, the selection above will be used since
// the dimension arrays are passed by reference. Changes of these arrays
// outside the dataset object directly change the values of these arrays
// in the dataset object.
For ScalarDS, the memory data buffer is a one-dimensional array of byte, short, int, float, double or String type, based on the datatype of the dataset.

For CompoundDS, the memory data object is a java.util.List object. Each element of the list is a data array that corresponds to a compound field.

For example, if compound dataset "comp" has the following nested structure and member datatypes

comp --> m01 (int)
comp --> m02 (float)
comp --> nest1 --> m11 (char)
comp --> nest1 --> m12 (String)
comp --> nest1 --> nest2 --> m21 (long)
comp --> nest1 --> nest2 --> m22 (double)

then getData() returns a list of six arrays: {int[], float[], char[], String[], long[] and double[]}.
read in class Dataset

Throws: HDF5Exception

See Also: getData()
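The inherited CompoundDS selection methods can limit read() to particular members. A minimal sketch (assuming, as their names suggest, that setMemberSelection(false) deselects all members and selectMember(int) selects a member by index; dset is an H5CompoundDS):

dset.init();
dset.setMemberSelection(false);     // assumed: deselect all compound members
dset.selectMember(0);               // assumed: select the first member
dset.selectMember(2);               // assumed: select the third member
java.util.List data = (java.util.List) dset.read();
// data now holds one array per selected member, in member order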
public void write(java.lang.Object buf) throws HDF5Exception
The data buffer is a vector that contains the data values of compound fields. The data is written into file field by field.
write in class Dataset

Parameters:
    buf - the vector that contains the data values of compound fields.

Throws: HDF5Exception
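A minimal sketch of writing by field (assuming dset is an initialized H5CompoundDS whose first member is an int and second member is a float, both selected, and that LENGTH matches the number of selected data points):

int[]   intData   = new int[LENGTH];       // values for the first (int) member
float[] floatData = new float[LENGTH];     // values for the second (float) member
// ... fill the arrays ...
java.util.Vector buf = new java.util.Vector();
buf.add(intData);                          // one array per compound field, in member order
buf.add(floatData);
dset.write(buf);                           // written into the file field by field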
public java.util.List getMetadata() throws HDF5Exception
Retrieves the metadata, such as attributes, from file. Metadata such as attributes are stored in a List.

Specified by: getMetadata in interface DataFormat

Throws: HDF5Exception
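A minimal sketch of inspecting the returned list (assuming the elements are ncsa.hdf.object.Attribute instances with getName() and getValue() accessors):

java.util.List meta = dset.getMetadata();
for (int i = 0; i < meta.size(); i++) {
    ncsa.hdf.object.Attribute attr = (ncsa.hdf.object.Attribute) meta.get(i);
    System.out.println(attr.getName());    // the value array is available via attr.getValue()
}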
public java.util.List getMetadata(int... attrPropList) throws HDF5Exception
Throws: HDF5Exception
public void writeMetadata(java.lang.Object info) throws java.lang.Exception
If an HDF(4&5) attribute exists in file, this method updates its value. If the attribute does not exist in file, it creates the attribute in file and attaches it to the object. It will fail to write a new attribute to an object where an attribute with the same name already exists. To update the value of an existing attribute in file, one needs to get the instance of the attribute by getMetadata(), change its value, and use writeMetadata() to write the value back.

Specified by: writeMetadata in interface DataFormat

Parameters:
    info - the metadata to write.

Throws: java.lang.Exception
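A minimal sketch of the update pattern described above (assuming the first attribute is integer-valued and that Attribute.getValue() returns the value array by reference):

java.util.List meta = dset.getMetadata();
ncsa.hdf.object.Attribute attr = (ncsa.hdf.object.Attribute) meta.get(0);
int[] value = (int[]) attr.getValue();     // assumed integer-valued attribute
value[0] = 100;                            // change the value in place
dset.writeMetadata(attr);                  // write the updated attribute back to file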
public void removeMetadata(java.lang.Object info) throws HDF5Exception
Deletes an existing piece of metadata from this data object.

Specified by: removeMetadata in interface DataFormat

Parameters:
    info - the metadata to delete.

Throws: HDF5Exception
public int open()
Opens an existing object, such as a dataset or group, for access.

open in class HObject

See Also: HObject.close(int)
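A minimal sketch of pairing open() with close(int) so the low-level identifier is always released (error handling otherwise omitted):

int did = dset.open();          // open the underlying HDF5 dataset, returns its identifier
try {
    // ... use did with low-level ncsa.hdf.hdf5lib.H5 calls if needed ...
} finally {
    dset.close(did);            // release the dataset identifier
}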
public void close(int did)
Subclasses must implement this method because different data objects have their own ways of closing their data resources.

For example, H5Group.close() calls the ncsa.hdf.hdf5lib.H5.H5Gclose() method and closes the group resource specified by the group id.

close in class HObject

Parameters:
    did - the object identifier.

public void init()
init() is designed to support lazy operation in a dataset object. When a data object is retrieved from file, the datatype, dataspace and raw data are not loaded into memory. When the object is asked to read the raw data from file, init() is first called to get the datatype and dataspace information, and then the raw data is loaded from file.

init() is also used to reset the selection of a dataset (start, stride and count) to the default, which is the entire dataset for 1D or 2D datasets. In the following example, init() at step 1) retrieves datatype and dataspace information from file, getData() at step 3) reads only one data point, init() at step 4) resets the selection to the whole dataset, and getData() at step 6) reads the values of the whole dataset into memory.

dset = (Dataset) file.get(NAME_DATASET);

// 1) get datatype and dataspace information from file
dset.init();
rank = dset.getRank();              // rank = 2, a 2D dataset
count = dset.getSelectedDims();
start = dset.getStartDims();
dims = dset.getDims();

// 2) select only one data point
for (int i = 0; i < rank; i++) {
    start[i] = 0;
    count[i] = 1;
}

// 3) read one data point
data = dset.getData();

// 4) reset to select the whole dataset
dset.init();

// 5) clean the memory data buffer
dset.clearData();

// 6) read the whole dataset
data = dset.getData();

init in class Dataset
public void setName(java.lang.String newName) throws java.lang.Exception
setName(String newName) changes the name of the object in the file.

setName in class HObject

Parameters:
    newName - the new name of the object.

Throws: java.lang.Exception
@Deprecated public static Dataset create(java.lang.String name, Group pgroup, long[] dims, java.lang.String[] memberNames, Datatype[] memberDatatypes, int[] memberSizes, java.lang.Object data) throws java.lang.Exception
Deprecated. Not for public use in the future. Use create(String, Group, long[], long[], long[], int, String[], Datatype[], int[], long[][], Object) instead.

Throws: java.lang.Exception
@Deprecated public static Dataset create(java.lang.String name, Group pgroup, long[] dims, java.lang.String[] memberNames, Datatype[] memberDatatypes, int[] memberRanks, long[][] memberDims, java.lang.Object data) throws java.lang.Exception
Deprecated. Not for public use in the future. Use create(String, Group, long[], long[], long[], int, String[], Datatype[], int[], long[][], Object) instead.

Throws: java.lang.Exception
public static Dataset create(java.lang.String name, Group pgroup, long[] dims, long[] maxdims, long[] chunks, int gzip, java.lang.String[] memberNames, Datatype[] memberDatatypes, int[] memberRanks, long[][] memberDims, java.lang.Object data) throws java.lang.Exception
This function provides an easy way to create a simple compound dataset in file by hiding the tedious details of creating a compound dataset from users.

This function calls H5.H5Dcreate() to create a simple compound dataset in file. Nested compound datasets are not supported. The required information to create a compound dataset includes the name, the parent group and dataspace of the dataset, and the names, datatypes and dataspaces of the compound fields. Other information, such as chunks, compression and the data buffer, is optional.

The following example shows how to use this function to create a compound dataset in file.

H5File file = null;
String message = "";
Group pgroup = null;
int[] DATA_INT = new int[DIM_SIZE];
float[] DATA_FLOAT = new float[DIM_SIZE];
String[] DATA_STR = new String[DIM_SIZE];
long[] DIMs = { 50, 10 };
long[] CHUNKs = { 25, 5 };

try {
    file = (H5File) H5FILE.open(fname, H5File.CREATE);
    file.open();
    pgroup = (Group) file.get("/");
} catch (Exception ex) {
}

Vector data = new Vector();
data.add(0, DATA_INT);
data.add(1, DATA_FLOAT);
data.add(2, DATA_STR);

// create the member datatypes
Datatype[] mdtypes = new H5Datatype[3];
String[] mnames = { "int", "float", "string" };
Dataset dset = null;
try {
    mdtypes[0] = new H5Datatype(Datatype.CLASS_INTEGER, 4, -1, -1);
    mdtypes[1] = new H5Datatype(Datatype.CLASS_FLOAT, 4, -1, -1);
    mdtypes[2] = new H5Datatype(Datatype.CLASS_STRING, STR_LEN, -1, -1);
    dset = file.createCompoundDS("/CompoundDS", pgroup, DIMs, null, CHUNKs, 9,
            mnames, mdtypes, null, data);
} catch (Exception ex) {
    failed(message, ex, file);
    return 1;
}
Parameters:
    name - the name of the new dataset.
    pgroup - the parent group where the new dataset is created.
    dims - the dimension sizes of the new dataset.
    maxdims - the maximum dimension sizes of the new dataset; null if maxdims is the same as dims.
    chunks - the chunk sizes of the new dataset; null if no chunking.
    gzip - the GZIP compression level (1 to 9); 0 or negative values if no compression.
    memberNames - the names of the compound datatype members.
    memberDatatypes - the datatypes of the compound datatype members.
    memberRanks - the ranks of the members.
    memberDims - the dimension sizes of the members.
    data - a list of data arrays written to the new dataset; null if no data is written to the new dataset.

Throws: java.lang.Exception
public boolean isString(int tid)
Checks if a given datatype is a string.

isString in class Dataset

Parameters:
    tid - the datatype identifier.
public int getSize(int tid)
Returns the size in bytes of a given datatype.

getSize in class Dataset

Parameters:
    tid - the datatype identifier.