
GeoTools Study Notes ----- The ShapeFile Format ---- The DBF Header and the DbaseFileHeader Class
Layout of a DBF file:
    File header (modeled by the DbaseFileHeader class)
    Record 1, Record 2, ... (Record)

Detailed format of the DBF header (byte positions in the file):

    Byte 0:       version information of the file.
    Bytes 1-3:    date of the most recent update, in YYMMDD format.
    Bytes 4-7:    number of records in the file.
    Bytes 8-9:    number of bytes in the file header.
    Bytes 10-11:  length of one record, in bytes.
    Bytes 12-13:  reserved for future descriptive information; filled with 0.
    Byte 14:      flag indicating an incomplete transaction.
    Byte 15:      dBASE IV encryption flag.
    Bytes 16-27:  reserved for multi-user processing.
    Byte 28:      MDX flag of the DBF file. When a DBF table is created together with an MDX-format index file, this byte in the table header is set automatically; the next time you try to open the table, the data engine recognizes the flag and, if it is true, tries to open the corresponding MDX file.
    Byte 29:      language driver ID.
    Bytes 30-31:  reserved for future descriptive information; filled with 0.
    Bytes 32 onward (n*32 bytes): array of field descriptors, where n is the number of fields; the structure of this array is explained in detail in Table 2.8.
    Final byte (0x0D): terminator of the field descriptors.
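The fixed portion of the header described above can be decoded with plain java.nio, independently of GeoTools. The following stand-alone sketch (class and field values are purely illustrative) builds a synthetic 12-byte header prefix following the byte positions in the table, then reads the version, update date, record count, header length and record length back out of it:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DbfHeaderPrefixDemo {

    // Build a synthetic header prefix and decode it back; returns
    // "version year-month-day records headerLen recordLen".
    static String demo() {
        ByteBuffer b = ByteBuffer.allocate(32).order(ByteOrder.LITTLE_ENDIAN);
        b.put((byte) 0x03);                            // byte 0: version (plain dBASE III)
        b.put((byte) 97).put((byte) 7).put((byte) 26); // bytes 1-3: update date, YYMMDD
        b.putInt(10);                                  // bytes 4-7: record count
        b.putShort((short) 162);                       // bytes 8-9: header length
        b.putShort((short) 61);                        // bytes 10-11: record length
        b.flip();                                      // bytes 12-31 are flags/reserved

        int version = b.get() & 0xff;
        int year = 1900 + (b.get() & 0xff);            // dBASE III's Y2K-unsafe convention
        int month = b.get() & 0xff;
        int day = b.get() & 0xff;
        int records = b.getInt();
        int headerLen = b.getShort() & 0xffff;         // unsigned little-endian shorts
        int recordLen = b.getShort() & 0xffff;
        return version + " " + year + "-" + month + "-" + day + " " + records
                + " " + headerLen + " " + recordLen;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 3 1997-7-26 10 162 61
    }
}
```

Note the little-endian byte order and the `& 0xffff` masking of the two-byte lengths; the same tricks appear in the readHeader method of the class discussed below.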
Detailed format of one field descriptor in the DBF header (modeled by the inner class DbaseField; the descriptors are collected in the fields array):

    Bytes 0-10:  field name, as ASCII values.
    Byte 11:     field data type, as an ASCII value (B, C, D, G, L, M or N; see Table 2.9 for details).
    Bytes 12-15: reserved for future descriptive information; filled with 0.
    Byte 16:     field length, binary.
    Byte 17:     field decimal count (precision), binary.
    Bytes 18-19: reserved; filled with 0.
    Byte 20:     work area ID.
    Bytes 21-30: reserved; filled with 0.
    Byte 31:     MDX flag; set to true if an MDX-format index file exists, otherwise empty.
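A 32-byte field descriptor can be decoded the same way. This sketch (again stand-alone, with an invented field name) builds one descriptor for a numeric field per the layout above and parses it back, including the NUL-terminated name and the unsigned length byte:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class DbfFieldDescriptorDemo {

    // Build one synthetic 32-byte field descriptor (a numeric field "AREA",
    // length 18, 5 decimals) and decode it back per the layout above.
    static String demo() {
        ByteBuffer b = ByteBuffer.allocate(32).order(ByteOrder.LITTLE_ENDIAN);
        b.put("AREA".getBytes(StandardCharsets.US_ASCII)); // bytes 0-10: name,
        b.position(11);                                    // NUL padded
        b.put((byte) 'N');   // byte 11: field type
        b.putInt(0);         // bytes 12-15: reserved
        b.put((byte) 18);    // byte 16: field length
        b.put((byte) 5);     // byte 17: decimal count
        b.position(0);       // bytes 18-31: reserved, left as zero

        byte[] nameBytes = new byte[11];
        b.get(nameBytes);
        String name = new String(nameBytes, StandardCharsets.US_ASCII);
        int nul = name.indexOf(0);
        if (nul != -1) {
            name = name.substring(0, nul);  // stop at the first NUL byte
        }
        char type = (char) b.get();
        b.getInt();                         // skip the reserved bytes
        int length = b.get() & 0xff;        // unsigned: lengths up to 255
        int decimals = b.get() & 0xff;
        return name.trim() + " " + type + " " + length + " " + decimals;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // AREA N 18 5
    }
}
```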
Data types in a DBF file, and the input each one allows:

    B (Binary):         any characters.
    C (Character):      any characters.
    D (Date):           digits and a character separating year, month and day; stored internally in YYYYMMDD format.
    G (General or OLE): any characters.
    N (Numeric):        - . 0 1 2 3 4 5 6 7 8 9
    L (Logical):        ? Y y N n T t F f ("?" means not initialized).
    M (Memo):           any characters.
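The allowed-input rules above can be expressed as a small validator. This is a sketch, not part of GeoTools; the allowance of a blank in numeric fields is my own assumption (numeric values are typically space-padded in DBF records), not something the table states:

```java
public class DbfInputCheck {

    // True if every character of value is legal input for the given dBASE
    // type, per the table above. Only N, L and D are restricted; the
    // B, C, G and M types accept any characters.
    static boolean isLegal(char type, String value) {
        switch (type) {
        case 'N':
            // sign, decimal point, digits; space allowed here by assumption
            return value.matches("[-.0-9 ]*");
        case 'L':
            return value.length() == 1
                    && "?YyNnTtFf".indexOf(value.charAt(0)) != -1;
        case 'D':
            return value.matches("[0-9]{8}"); // stored as YYYYMMDD
        default:
            return true; // B, C, G, M: any characters
        }
    }

    public static void main(String[] args) {
        System.out.println(isLegal('N', "-12.5") + " " + isLegal('L', "?")
                + " " + isLegal('D', "19970726") + " " + isLegal('L', "maybe"));
        // prints: true true true false
    }
}
```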
An example: suppose the file holds 10 records and each record has 4 fields, whose lengths are 12, 14, 16 and 18 bytes. The file is then laid out as follows.

The header takes 32 + 32*4 + 2 bytes: the first 32 bytes are the basic header information, the next 32*4 bytes are the field descriptors (the definitions of the fields), and the last two bytes are the hexadecimal values 0D and 20; 0D is the field-descriptor terminator described in the table above, and 20 is a space.

The data section takes (12+14+16+18)*10 + 1 bytes: 12+14+16+18 bytes per record, 10 records in total. The final byte is the end-of-data marker, usually the hexadecimal value 1A.

A further note: because of the format defined above, a field name cannot exceed 11 bytes (or 5 Chinese characters), and the data of a character field cannot exceed 255 bytes. These length limits must be kept in mind when exporting data to DBF files from text files, Excel or large databases.
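The arithmetic of that example can be checked with a couple of lines of Java. This sketch follows the article's own accounting (two terminator bytes after the field descriptors, and no per-record deletion-flag byte; many DBF descriptions differ on both points, so treat these formulas as the article's, not the specification's):

```java
public class DbfSizeExample {

    // header = 32-byte fixed part + 32 bytes per field descriptor
    //          + 2 terminator bytes (0x0D and 0x20), per the article
    static int headerSize(int fieldCount) {
        return 32 + 32 * fieldCount + 2;
    }

    // data = recordLength * recordCount + 1 trailing EOF byte (0x1A)
    static int dataSize(int[] fieldLengths, int recordCount) {
        int recordLength = 0;
        for (int len : fieldLengths) {
            recordLength += len;
        }
        return recordLength * recordCount + 1;
    }

    public static void main(String[] args) {
        int[] lengths = { 12, 14, 16, 18 };
        System.out.println(headerSize(4) + " " + dataSize(lengths, 10));
        // prints: 162 601
    }
}
```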
The following is a detailed, annotated walkthrough of the class file.
/*
 * GeoTools - The Open Source Java GIS Toolkit
 * http://geotools.org
 *
 * (C) Open Source Geospatial Foundation (OSGeo)
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation;
 * version 2.1 of the License.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * This file is based on an original contained in the GISToolkit project:
 * http://gistoolkit.sourceforge.net/
 * --
 * Annotations by Zhang Hengcai
 *
 * GPL notice
 */
package org.geotools.data.shapefile.dbf;

import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

import org.geotools.resources.NIOUtilities;

/**
 * Class to represent the header of a Dbase III file. Creation date: (5/15/2001
 * 5:15:30 PM)
 *
 * @source $URL:
 *         http://svn.geotools.org/geotools/trunk/gt/modules/plugin/shapefile/src/main/java/org/geotools/data/shapefile/dbf/DbaseFileHeader.java $
 * -----------------------------
 *
 * This class represents the header of a shapefile's DBF file,
 * modeled as a Java object.
 * The fields array below plays the central role.
 *
 * --
 * A few concepts worth distinguishing carefully:
 * (1) what a Table is
 * (2) what a Field (the DbaseField class) represents
 * (3) what a Record is
 * (4) Column vs. row
 * (5) the Header
 */
public class DbaseFileHeader {
    // Constant for the size of a record
    // size, in bytes, of the fixed header and of each field descriptor
    private static final int FILE_DESCRIPTOR_SIZE = 32;

    // type of the file, must be 03h
    // the magic number identifying this file type
    private static final byte MAGIC = 0x03;

    private static final int MINIMUM_HEADER = 33;

    // Date the file was last updated.
    private Date date = new Date();

    // number of records
    private int recordCnt = 0;

    // number of fields
    private int fieldCnt = 0;

    // set this to a default length of 1, which is enough for one "space"
    // character which signifies an empty record
    // length of one record
    private int recordLength = 1;

    // set this to a flagged value so if no fields are added before the write,
    // we know to adjust the headerLength to MINIMUM_HEADER
    private int headerLength = -1;

    private int largestFieldSize = 0;

    // logging
    private Logger logger = org.geotools.util.logging.Logging
            .getLogger("org.geotools.data.shapefile");

    /**
     * Class for holding the information associated with a record.
     * Describes one field of a record.
     */
    class DbaseField {

        // Field Name
        String fieldName;

        // Field Type (C N L D or M)
        char fieldType;

        // Field Data Address offset from the start of the record.
        int fieldDataAddress;

        // Length of the data in bytes
        int fieldLength;

        // Field decimal count in Binary, indicating where the decimal is
        int decimalCount;
    }

    // collection of header records.
    // lets start out with a zero-length array, just in case
    // --
    // the all-important array of field descriptors
    private DbaseField[] fields = new DbaseField[0];

    // fill the buffer completely from the channel
    private void read(ByteBuffer buffer, ReadableByteChannel channel)
            throws IOException {
        while (buffer.remaining() > 0) {
            if (channel.read(buffer) == -1) {
                throw new EOFException("Premature end of file");
            }
        }
    }

    /**
     * Determine the most appropriate Java Class for representing the data in
     * the field.
     *
     * <PRE>
     * All packages are java.lang unless otherwise specified.
     * C (Character) -> String
     * N (Numeric)   -> Integer or Double (depends on field's decimal count)
     * F (Floating)  -> Double
     * L (Logical)   -> Boolean
     * D (Date)      -> java.util.Date
     * Unknown       -> String
     * </PRE>
     *
     * @param i
     *            The index of the field, from 0 to
     *            <CODE>getNumFields() - 1</CODE>.
     * @return A Class which closely represents the dbase field type.
     */
    public Class getFieldClass(int i) {
        Class typeClass = null;

        switch (fields[i].fieldType) {
        case 'C':
            typeClass = String.class;
            break;

        case 'N':
            if (fields[i].decimalCount == 0) {
                if (fields[i].fieldLength < 10) {
                    typeClass = Integer.class;
                } else {
                    typeClass = Long.class;
                }
            } else {
                typeClass = Double.class;
            }
            break;

        case 'F':
            typeClass = Double.class;
            break;

        case 'L':
            typeClass = Boolean.class;
            break;

        case 'D':
            typeClass = Date.class;
            break;

        default:
            typeClass = String.class;
            break;
        }

        return typeClass;
    }

    /**
     * Add a column to this DbaseFileHeader. The type is one of (C N L or D)
     * character, number, logical(true/false), or date. The Field length is the
     * total length in bytes reserved for this column. The decimal count only
     * applies to numbers(N), and floating point values (F), and refers to the
     * number of characters to reserve after the decimal point. <B>Don't expect
     * miracles from this...</B>
     *
     * <PRE>
     * Field Type MaxLength
     * ---------- ---------
     * C          254
     * D          8
     * F          20
     * N          18
     * </PRE>
     *
     * @param inFieldName
     *            The name of the new field, must be less than 10 characters or
     *            it gets truncated.
     * @param inFieldType
     *            A character representing the dBase field, ( see above ). Case
     *            insensitive.
     * @param inFieldLength
     *            The length of the field, in bytes ( see above )
     * @param inDecimalCount
     *            For numeric fields, the number of decimal places to track.
     * @throws DbaseFileException
     *             If the type is not recognized.
     * ----------------------
     * Adds one field (that is, one column).
     */
    public void addColumn(String inFieldName, char inFieldType,
            int inFieldLength, int inDecimalCount) throws DbaseFileException {
        // sanity check
        if (inFieldLength <= 0) {
            throw new DbaseFileException("field length <= 0");
        }
        // first field
        if (fields == null) {
            fields = new DbaseField[0];
        }
        // running offset of each field within a record
        int tempLength = 1; // the length is used for the offset, and there is a
        // * for deleted as the first byte
        DbaseField[] tempFieldDescriptors = new DbaseField[fields.length + 1];
        for (int i = 0; i < fields.length; i++) {
            fields[i].fieldDataAddress = tempLength;
            tempLength = tempLength + fields[i].fieldLength;
            tempFieldDescriptors[i] = fields[i];
        }
        // populate the new field descriptor
        tempFieldDescriptors[fields.length] = new DbaseField();
        tempFieldDescriptors[fields.length].fieldLength = inFieldLength;
        tempFieldDescriptors[fields.length].decimalCount = inDecimalCount;
        tempFieldDescriptors[fields.length].fieldDataAddress = tempLength;

        // set the field name
        String tempFieldName = inFieldName;
        if (tempFieldName == null) {
            tempFieldName = "NoName";
        }
        // Fix for GEOT-42, ArcExplorer will not handle field names > 10 chars
        // Sorry folks.
        // a field name longer than 10 characters is truncated to the first
        // 10 characters, with a warning
        if (tempFieldName.length() > 10) {
            tempFieldName = tempFieldName.substring(0, 10);
            if (logger.isLoggable(Level.WARNING)) {
                logger.warning("FieldName " + inFieldName
                        + " is longer than 10 characters, truncating to "
                        + tempFieldName);
            }
        }
        tempFieldDescriptors[fields.length].fieldName = tempFieldName;

        // the field type
        if ((inFieldType == 'C') || (inFieldType == 'c')) {
            tempFieldDescriptors[fields.length].fieldType = 'C';
            if (inFieldLength > 254) {
                if (logger.isLoggable(Level.FINE)) {
                    logger.fine("Field Length for " + inFieldName + " set to "
                            + inFieldLength
                            + " Which is longer than 254, not consistent with dbase III");
                }
            }
        } else if ((inFieldType == 'S') || (inFieldType == 's')) {
            tempFieldDescriptors[fields.length].fieldType = 'C';
            if (logger.isLoggable(Level.WARNING)) {
                logger.warning("Field type for " + inFieldName
                        + " set to S which is flat out wrong people!, I am setting this to C, in the hopes you meant character.");
            }
            if (inFieldLength > 254) {
                if (logger.isLoggable(Level.FINE)) {
                    logger.fine("Field Length for " + inFieldName + " set to "
                            + inFieldLength
                            + " Which is longer than 254, not consistent with dbase III");
                }
            }
            tempFieldDescriptors[fields.length].fieldLength = 8;
        } else if ((inFieldType == 'D') || (inFieldType == 'd')) {
            tempFieldDescriptors[fields.length].fieldType = 'D';
            if (inFieldLength != 8) {
                if (logger.isLoggable(Level.FINE)) {
                    logger.fine("Field Length for " + inFieldName + " set to "
                            + inFieldLength + " Setting to 8 digits YYYYMMDD");
                }
            }
            tempFieldDescriptors[fields.length].fieldLength = 8;
        } else if ((inFieldType == 'F') || (inFieldType == 'f')) {
            tempFieldDescriptors[fields.length].fieldType = 'F';
            if (inFieldLength > 20) {
                if (logger.isLoggable(Level.FINE)) {
                    logger.fine("Field Length for " + inFieldName + " set to "
                            + inFieldLength
                            + " Preserving length, but should be set to Max of 20 not valid for dbase IV, and UP specification, not present in dbaseIII.");
                }
            }
        } else if ((inFieldType == 'N') || (inFieldType == 'n')) {
            tempFieldDescriptors[fields.length].fieldType = 'N';
            if (inFieldLength > 18) {
                if (logger.isLoggable(Level.FINE)) {
                    logger.fine("Field Length for " + inFieldName + " set to "
                            + inFieldLength
                            + " Preserving length, but should be set to Max of 18 for dbase III specification.");
                }
            }
            if (inDecimalCount < 0) {
                if (logger.isLoggable(Level.FINE)) {
                    logger.fine("Field Decimal Position for " + inFieldName
                            + " set to " + inDecimalCount
                            + " Setting to 0 no decimal data will be saved.");
                }
                tempFieldDescriptors[fields.length].decimalCount = 0;
            }
            if (inDecimalCount > inFieldLength - 1) {
                if (logger.isLoggable(Level.WARNING)) {
                    logger.warning("Field Decimal Position for " + inFieldName
                            + " set to " + inDecimalCount + " Setting to "
                            + (inFieldLength - 1)
                            + " no non decimal data will be saved.");
                }
                tempFieldDescriptors[fields.length].decimalCount = inFieldLength - 1;
            }
        } else if ((inFieldType == 'L') || (inFieldType == 'l')) {
            tempFieldDescriptors[fields.length].fieldType = 'L';
            if (inFieldLength != 1) {
                if (logger.isLoggable(Level.FINE)) {
                    logger.fine("Field Length for " + inFieldName + " set to "
                            + inFieldLength
                            + " Setting to length of 1 for logical fields.");
                }
            }
            tempFieldDescriptors[fields.length].fieldLength = 1;
        } else {
            throw new DbaseFileException("Undefined field type " + inFieldType
                    + " For column " + inFieldName);
        }
        // the length of a record
        tempLength = tempLength
                + tempFieldDescriptors[fields.length].fieldLength;

        // set the new fields.
        // update the derived values
        fields = tempFieldDescriptors;
        fieldCnt = fields.length;
        headerLength = MINIMUM_HEADER + 32 * fields.length;
        recordLength = tempLength;
    }

    /**
     * Remove a column from this DbaseFileHeader.
     *
     * @todo This is really ugly, don't know who wrote it, but it needs fixin...
     * @param inFieldName
     *            The name of the field, will ignore case and trim.
     * @return index of the removed column, -1 if no found
     *
     * ----
     * Removes one field (one column) by name.
     */
    public int removeColumn(String inFieldName) {
        int retCol = -1;
        int tempLength = 1;
        DbaseField[] tempFieldDescriptors = new DbaseField[fields.length - 1];
        for (int i = 0, j = 0; i < fields.length; i++) {
            if (!inFieldName.equalsIgnoreCase(fields[i].fieldName.trim())) {
                // if this is the last field and we still haven't found the
                // named field
                if (i == j && i == fields.length - 1) {
                    System.err.println("Could not find a field named '"
                            + inFieldName + "' for removal");
                    return retCol;
                }
                tempFieldDescriptors[j] = fields[i];
                tempFieldDescriptors[j].fieldDataAddress = tempLength;
                tempLength += tempFieldDescriptors[j].fieldLength;
                // only increment j on non-matching fields
                j++;
            } else {
                retCol = i;
            }
        }

        // set the new fields.
        fields = tempFieldDescriptors;
        headerLength = 33 + 32 * fields.length;
        recordLength = tempLength;

        return retCol;
    }

    // Retrieve the length of the field at the given index
    /**
     * Returns the field length in bytes.
     *
     * @param inIndex
     *            The field index.
     * @return The length in bytes.
     */
    public int getFieldLength(int inIndex) {
        return fields[inIndex].fieldLength;
    }

    // Retrieve the location of the decimal point within the field.
    /**
     * Get the decimal count of this field.
     *
     * @param inIndex
     *            The field index.
     * @return The decimal count.
     */
    public int getFieldDecimalCount(int inIndex) {
        return fields[inIndex].decimalCount;
    }

    // Retrieve the Name of the field at the given index
    /**
     * Get the field name.
     *
     * @param inIndex
     *            The field index.
     * @return The name of the field.
     */
    public String getFieldName(int inIndex) {
        return fields[inIndex].fieldName;
    }

    // Retrieve the type of field at the given index
    /**
     * Get the character class of the field.
     *
     * @param inIndex
     *            The field index.
     * @return The dbase character representing this field.
     */
    public char getFieldType(int inIndex) {
        return fields[inIndex].fieldType;
    }

    /**
     * Get the date this file was last updated.
     *
     * @return The Date last modified.
     */
    public Date getLastUpdateDate() {
        return date;
    }

    /**
     * Return the number of fields in the records.
     *
     * @return The number of fields in this table.
     */
    public int getNumFields() {
        return fields.length;
    }

    /**
     * Return the number of records in the file
     *
     * @return The number of records in this table.
     */
    public int getNumRecords() {
        return recordCnt;
    }

    /**
     * Get the length of the records in bytes.
     *
     * @return The number of bytes per record.
     */
    public int getRecordLength() {
        return recordLength;
    }

    /**
     * Get the length of the header
     *
     * @return The length of the header in bytes.
     */
    public int getHeaderLength() {
        return headerLength;
    }

    /**
     * Read the header data from the DBF file.
     *
     * @param channel
     *            A readable byte channel. If you have an InputStream you need
     *            to use, you can call java.nio.Channels.getChannel(InputStream
     *            in).
     * @throws IOException
     *             If errors occur while reading.
     * -------------------
     * Reads and parses the file header of a DBF file.
     */
    public void readHeader(ReadableByteChannel channel) throws IOException {
        // we'll read in chunks of 1K
        ByteBuffer in = ByteBuffer.allocateDirect(1024);
        // do this or GO CRAZY
        // ByteBuffers come preset to BIG_ENDIAN !
        in.order(ByteOrder.LITTLE_ENDIAN);

        // only want to read first 10 bytes...
        in.limit(10);

        read(in, channel);
        in.position(0);

        // type of file.
        byte magic = in.get();
        if (magic != MAGIC) {
            throw new IOException("Unsupported DBF file Type "
                    + Integer.toHexString(magic));
        }

        // parse the update date information.
        int tempUpdateYear = in.get();
        int tempUpdateMonth = in.get();
        int tempUpdateDay = in.get();
        // ouch Y2K uncompliant
        if (tempUpdateYear > 90) {
            tempUpdateYear = tempUpdateYear + 1900;
        } else {
            tempUpdateYear = tempUpdateYear + 2000;
        }
        Calendar c = Calendar.getInstance();
        c.set(Calendar.YEAR, tempUpdateYear);
        c.set(Calendar.MONTH, tempUpdateMonth - 1);
        c.set(Calendar.DATE, tempUpdateDay);
        date = c.getTime();

        // read the number of records.
        recordCnt = in.getInt();

        // read the length of the header structure.
        // ahhh.. unsigned little-endian shorts
        // mask out the byte and or it with shifted 2nd byte
        headerLength = (in.get() & 0xff) | ((in.get() & 0xff) << 8);

        // if the header is bigger than our 1K, reallocate
        if (headerLength > in.capacity()) {
            NIOUtilities.clean(in);
            in = ByteBuffer.allocateDirect(headerLength - 10);
        }
        in.limit(headerLength - 10);
        in.position(0);
        read(in, channel);
        in.position(0);

        // read the length of a record
        // ahhh.. unsigned little-endian shorts
        recordLength = (in.get() & 0xff) | ((in.get() & 0xff) << 8);

        // skip the reserved bytes in the header.
        in.position(in.position() + 20);

        // calculate the number of Fields in the header
        fieldCnt = (headerLength - FILE_DESCRIPTOR_SIZE - 1)
                / FILE_DESCRIPTOR_SIZE;

        // read all of the header records
        List lfields = new ArrayList();
        for (int i = 0; i < fieldCnt; i++) {
            DbaseField field = new DbaseField();

            // read the field name
            byte[] buffer = new byte[11];
            in.get(buffer);
            String name = new String(buffer);
            int nullPoint = name.indexOf(0);
            if (nullPoint != -1) {
                name = name.substring(0, nullPoint);
            }
            field.fieldName = name.trim();

            // read the field type
            field.fieldType = (char) in.get();

            // read the field data address, offset from the start of the record.
            field.fieldDataAddress = in.getInt();

            // read the field length in bytes
            int length = (int) in.get();
            if (length < 0) {
                length = length + 256;
            }
            field.fieldLength = length;

            if (length > largestFieldSize) {
                largestFieldSize = length;
            }

            // read the field decimal count in bytes
            field.decimalCount = (int) in.get();

            // reserved bytes.
            // in.skipBytes(14);
            in.position(in.position() + 14);

            // some broken shapefiles have 0-length attributes. The reference
            // implementation
            // (ArcExplorer 2.0, built with MapObjects) just ignores them.
            if (field.fieldLength > 0) {
                lfields.add(field);
            }
        }

        // Last byte is a marker for the end of the field definitions.
        // in.skipBytes(1);
        in.position(in.position() + 1);

        NIOUtilities.clean(in);

        fields = new DbaseField[lfields.size()];
        fields = (DbaseField[]) lfields.toArray(fields);
    }

    /**
     * Get the largest field size of this table.
     *
     * @return The largest field size in bytes.
     */
    public int getLargestFieldSize() {
        return largestFieldSize;
    }

    /**
     * Set the number of records in the file
     *
     * @param inNumRecords
     *            The number of records.
     */
    public void setNumRecords(int inNumRecords) {
        recordCnt = inNumRecords;
    }

    /**
     * Write the header data to the DBF file.
     *
     * @param out
     *            A channel to write to. If you have an OutputStream you can
     *            obtain the correct channel by using
     *            java.nio.Channels.newChannel(OutputStream out).
     * @throws IOException
     *             If errors occur.
     * -----------------
     * Writes the file header.
     */
    public void writeHeader(WritableByteChannel out) throws IOException {
        // take care of the annoying case where no records have been added...
        if (headerLength == -1) {
            headerLength = MINIMUM_HEADER;
        }
        ByteBuffer buffer = ByteBuffer.allocateDirect(headerLength);
        buffer.order(ByteOrder.LITTLE_ENDIAN);

        // write the output file type.
        buffer.put((byte) MAGIC);

        // write the date stuff
        Calendar c = Calendar.getInstance();
        c.setTime(new Date());
        buffer.put((byte) (c.get(Calendar.YEAR) % 100));
        buffer.put((byte) (c.get(Calendar.MONTH) + 1));
        buffer.put((byte) (c.get(Calendar.DAY_OF_MONTH)));

        // write the number of records in the datafile.
        buffer.putInt(recordCnt);

        // write the length of the header structure.
        buffer.putShort((short) headerLength);

        // write the length of a record
        buffer.putShort((short) recordLength);

        // // write the reserved bytes in the header
        // for (int i=0; i<20; i++) out.writeByteLE(0);
        buffer.position(buffer.position() + 20);

        // write all of the header records
        int tempOffset = 0;
        for (int i = 0; i < fields.length; i++) {
            // write the field name
            for (int j = 0; j < 11; j++) {
                if (fields[i].fieldName.length() > j) {
                    buffer.put((byte) fields[i].fieldName.charAt(j));
                } else {
                    buffer.put((byte) 0);
                }
            }

            // write the field type
            buffer.put((byte) fields[i].fieldType);
            // // write the field data address, offset from the start of the
            // record.
            buffer.putInt(tempOffset);
            tempOffset += fields[i].fieldLength;

            // write the length of the field.
            buffer.put((byte) fields[i].fieldLength);

            // write the decimal count.
            buffer.put((byte) fields[i].decimalCount);

            // write the reserved bytes.
            // for (int j=0; j<14; j++) out.writeByteLE(0);
            buffer.position(buffer.position() + 14);
        }

        // write the end of the field definitions marker
        buffer.put((byte) 0x0D);

        buffer.position(0);

        int r = buffer.remaining();
        while ((r -= out.write(buffer)) > 0) {
            ; // do nothing
        }

        NIOUtilities.clean(buffer);
    }

    /**
     * Get a simple representation of this header.
     *
     * @return A String representing the state of the header.
     * --
     * Describes every field contained in this header.
     *
     * The output consists of:
     * (1) the date
     * (2) the record count (recordCnt)
     * (3) the field count (fieldCnt)
     * (4) one line per field describing it
     */
    public String toString() {
        StringBuffer fs = new StringBuffer();
        for (int i = 0, ii = fields.length; i < ii; i++) {
            DbaseField f = fields[i];
            fs.append(f.fieldName + " " + f.fieldType + " " + f.fieldLength
                    + " " + f.decimalCount + " " + f.fieldDataAddress + "\n");
        }

        return "DB3 Header\n" + "Date : " + date + "\n" + "Records : "
                + recordCnt + "\n" + "Fields : " + fieldCnt + "\n" + fs;
    }
}
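The type-mapping decision in getFieldClass is worth isolating. The following stand-alone sketch mirrors that logic (the class name and static method are mine, for illustration only; in GeoTools the same decision lives inside DbaseFileHeader):

```java
import java.util.Date;

public class FieldClassDemo {

    // Mirrors the decision logic of DbaseFileHeader.getFieldClass:
    // numeric fields with no decimals map to Integer or Long depending
    // on their width; everything else maps as in the javadoc table.
    static Class<?> fieldClass(char type, int length, int decimalCount) {
        switch (type) {
        case 'C':
            return String.class;
        case 'N':
            return decimalCount == 0
                    ? (length < 10 ? Integer.class : Long.class)
                    : Double.class;
        case 'F':
            return Double.class;
        case 'L':
            return Boolean.class;
        case 'D':
            return Date.class;
        default:
            return String.class; // unknown types fall back to String
        }
    }

    public static void main(String[] args) {
        System.out.println(fieldClass('N', 9, 0).getSimpleName() + " "
                + fieldClass('N', 10, 0).getSimpleName() + " "
                + fieldClass('N', 18, 5).getSimpleName());
        // prints: Integer Long Double
    }
}
```

The width-9 cutoff is deliberate: a 9-digit decimal number always fits in an int, while 10 digits may overflow it, so the class switches to Long at length 10.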