I was using the com.chaosinmotion.asn1 package to parse ASN.1 files, but in practice I found that it does not support reading a file as a stream. That is hard to accept: for small files it does not matter, but a huge file would blow up the heap.
Taking advantage of how Java streams compose, I implemented my own ByteBufferInputStream:
import java.io.IOException;
import java.io.InputStream;

public class ByteBufferInputStream extends InputStream {

    private static final int BUFSIZE = 64 * 1024;

    private volatile InputStream in;
    private byte[] buffer;
    // the number of bytes of real data in the buffer
    private int bufferLength = 0;
    // the current position in the buffer
    private int bufferPosn = 0;

    public ByteBufferInputStream(InputStream in) {
        this.in = in;
        buffer = new byte[BUFSIZE];
    }

    // Refill the buffer from the underlying stream once it is exhausted;
    // returns false on EOF.
    private boolean readBuf() throws IOException {
        if (bufferPosn >= bufferLength) {
            bufferPosn = 0;
            bufferLength = in.read(buffer);
            if (bufferLength <= 0)
                return false; // EOF
        }
        return true;
    }

    @Override
    public int read() throws IOException {
        if (!readBuf())
            return -1;
        // Mask to an unsigned value so bytes >= 0x80 are not mistaken for EOF,
        // as required by the InputStream.read() contract.
        return buffer[bufferPosn++] & 0xff;
    }

    @Override
    public void close() throws IOException {
        if (in != null)
            in.close();
    }
}
It is used like this:
BerInputStream in = new BerInputStream(new ByteBufferInputStream(fs.open(file)));
With that, an oversized input file is no longer a problem.
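As a quick sanity check of the wrapper itself, independent of the ASN.1 parser, here is a minimal sketch (the demo class name and the sample bytes are made up for illustration) that reads an in-memory stream through ByteBufferInputStream and confirms every byte comes back as its unsigned value, including bytes >= 0x80:

import java.io.ByteArrayInputStream;
import java.io.IOException;

public class ByteBufferInputStreamDemo {
    public static void main(String[] args) throws IOException {
        // Illustrative data only: includes values >= 0x80 to confirm that
        // read() reports them as 128..255 rather than a negative value.
        byte[] data = { 0x30, (byte) 0x82, 0x01, 0x0A, (byte) 0xFF, 0x00 };

        try (ByteBufferInputStream in =
                 new ByteBufferInputStream(new ByteArrayInputStream(data))) {
            int b;
            int i = 0;
            while ((b = in.read()) != -1) {
                // Each value read back matches the unsigned value of the source byte.
                assert b == (data[i++] & 0xff);
                System.out.printf("0x%02X ", b);
            }
        }
    }
}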