log4j and Pivot

Here’s a simple way to consume log4j messages in Apache Pivot for use in a log console or similar.

First, create a custom appender that sends each log message to the Pivot message bus.

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.pivot.util.MessageBus;

public class MessageBusAppender extends AppenderSkeleton {

    @Override
    protected void append(LoggingEvent event) {
        // Wrap the event (plus the configured layout) and broadcast it
        // on the Pivot message bus for any subscribed component.
        MessageBus.sendMessage(new LogMessage(layout, event));
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }

    @Override
    public void close() {
        // nothing to clean up
    }

}

import org.apache.log4j.Layout;
import org.apache.log4j.spi.LoggingEvent;

public class LogMessage {

    private final Layout logLayout;
    private final LoggingEvent event;

    public LogMessage(Layout logLayout, LoggingEvent event) {
        this.logLayout = logLayout;
        this.event = event;
    }

    public LoggingEvent getEvent() {
        return event;
    }

    public Layout getLogLayout() {
        return logLayout;
    }

}

Any component that needs to display log messages can simply subscribe to them and update itself appropriately. Here’s an example that updates a TextPane:

public class LogPane extends Border {

    @BXML
    private TextPane logTxt;
    @BXML
    private PushButton clearBtn;
...
...
        logTxt.setDocument(new Document());

        MessageBus.subscribe(LogMessage.class, new MessageBusListener<LogMessage>() {

            public void messageSent(final LogMessage message) {
                ApplicationContext.queueCallback(new Runnable() {

                    @Override
                    public void run() {
                        String text = message.getLogLayout().format(message.getEvent());
                        logTxt.getDocument().add(new Paragraph(text));
                        if (message.getEvent().getThrowableInformation() != null)
                        {
                            StringBuilder sb = new StringBuilder();
                            for (String s : message.getEvent().getThrowableInformation().getThrowableStrRep())
                            {
                                sb.append("  ");
                                sb.append(s);
                                sb.append("\n");
                            }
                            logTxt.getDocument().add(new Paragraph(sb.toString()));
                        }
                    }
                });
            }
        });

        clearBtn.getButtonPressListeners().add(new ButtonPressListener() {

            public void buttonPressed(Button button) {
                logTxt.setDocument(new Document());

            }
        });
 ...
 ...
}

Now tie it together in your log4j config:

log4j.rootLogger=ERROR, Pivot

log4j.appender.Pivot=sample.MessageBusAppender
log4j.appender.Pivot.layout=org.apache.log4j.PatternLayout
log4j.appender.Pivot.layout.ConversionPattern=%-6r [%15.15t] %-5p %30.30c %x - %m%n
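
With that in place, every log4j call routes through the appender and onto the message bus. A minimal sketch of triggering it (Foo and doWork are hypothetical names, just for illustration):

import org.apache.log4j.Logger;

public class Foo {

    private static final Logger LOG = Logger.getLogger(Foo.class);

    public void doWork() {
        // With the config above (rootLogger=ERROR), this shows up as a
        // LogMessage on the Pivot message bus and, from there, in the
        // LogPane subscriber.
        LOG.error("something went wrong", new IllegalStateException("example"));
    }
}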

Counting bits

Here’s a simple input stream that will tell you the current position. Insert it after (outside of) any buffered input streams if you want an accurate position after a read; otherwise the buffer’s read-ahead would inflate the count.

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

public class CountingInputStream extends PushbackInputStream {

    private long bytesRead;

    public CountingInputStream(InputStream is) {
        super(is);
    }

    public CountingInputStream(InputStream is, int len) {
        super(is, len);
    }

    public long getPosition() {
        return bytesRead;
    }

    @Override
    public void unread(byte[] b, int off, int len) throws IOException {
        bytesRead -= len;
        super.unread(b, off, len);
    }

    @Override
    public void unread(int b) throws IOException {
        bytesRead--;
        super.unread(b);
    }

    @Override
    public void unread(byte[] b) throws IOException {
        bytesRead -= b.length;
        super.unread(b);
    }

    @Override
    public boolean markSupported() {
        return false;
    }

    @Override
    public synchronized void reset() throws IOException {
        throw new IOException("Mark not supported");
    }

    @Override
    public int read() throws IOException {
        int rd = super.read();
        if (rd != -1) {
            bytesRead++;
        }
        return rd;
    }

    // Note: read(byte[]) is deliberately not overridden; InputStream
    // delegates it to read(byte[], int, int) below, so counting it here
    // as well would double-count every byte.

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int read = super.read(b, off, len);
        if (read != -1) {
            bytesRead += read;
        }
        return read;
    }

    @Override
    public long skip(long n) throws IOException {
        long skipped = super.skip(n);
        bytesRead += skipped;
        return skipped;
    }
}
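
A minimal usage sketch under those assumptions (the file name is just an example): the counting stream wraps the buffered stream, so the buffer’s read-ahead never inflates the count.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class CountingExample {

    public static void main(String[] args) throws IOException {
        // The counter sits outside the buffer, so it only counts bytes
        // actually handed to the caller.
        CountingInputStream in = new CountingInputStream(
                new BufferedInputStream(new FileInputStream("data.bin")));
        byte[] buf = new byte[8192];
        while (in.read(buf, 0, buf.length) != -1) {
            // getPosition() always reflects the bytes consumed so far
        }
        System.out.println("bytes read: " + in.getPosition());
        in.close();
    }
}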

GZip pushback input stream

Java’s GZIPInputStream has an annoying habit of consuming bytes past the end of the gzip trailer. Normally, when reading a single file, this isn’t a problem, but when reading concatenated gzip files or gzip data embedded in other input streams, you would really like to have those over-read bytes back. Using this input stream, you can.

import java.io.IOException;
import java.io.PushbackInputStream;
import java.util.zip.GZIPInputStream;

public class GzipPushbackStream extends GZIPInputStream {

    private boolean pushed = false;
    private static final int TRAILER_LEN = 8; // CRC-32 + ISIZE = 8 byte trailer

    public GzipPushbackStream(PushbackInputStream pis) throws IOException {
        super(pis);
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int read = super.read(buf, off, len);
        if (eos && !pushed) {
            // Whatever the inflater is still holding beyond the 8-byte
            // trailer was over-read from the underlying stream; push it back.
            int n = inf.getRemaining();
            if (n > TRAILER_LEN) {
                int offset = super.len - (n - TRAILER_LEN);
                ((PushbackInputStream) in).unread(super.buf, offset, n - TRAILER_LEN);
            }
            pushed = true;
        }
        return read;
    }
}
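
Here’s a hedged sketch of how this might be used to walk a stream of concatenated gzip members (the file name is illustrative). The pushback capacity must be at least as large as GZIPInputStream’s internal buffer, which defaults to 512 bytes, so the over-read can always be pushed back:

import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;

public class ConcatenatedGzipReader {

    public static void main(String[] args) throws IOException {
        // Pushback capacity must cover whatever GZIPInputStream over-reads;
        // its internal buffer defaults to 512 bytes.
        PushbackInputStream pis = new PushbackInputStream(
                new FileInputStream("members.gz"), 512);
        byte[] buf = new byte[8192];
        while (true) {
            GzipPushbackStream gz;
            try {
                gz = new GzipPushbackStream(pis);
            } catch (EOFException e) {
                break; // no more members
            }
            int n;
            while ((n = gz.read(buf, 0, buf.length)) != -1) {
                // process n bytes of the current member here
            }
            // don't close gz; that would close the underlying stream
        }
        pis.close();
    }
}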

Java digesting performance

For the ACE audit manager, reading files and generating their digests is the slowest part of auditing. In previous versions of ACE (1.6 and lower), digesting was done in a simple read/update-digest loop using Java’s DigestInputStream. This appeared to work well enough, but I wanted to see what effect large blocks have on this model. When reading from remote resources, we raise the block size from the standard 4-32 KB to 1 MB.

Using that model, here’s how performance looked for a variety of block sizes, both aligned and unaligned. The following table compares raw reading, digesting only, and reading plus digesting. The test data was ~16 GB of large (27 MB) files on a SATA disk, on a machine with 8 GB of RAM and an Intel Q9550 @ 2.83 GHz. The digest algorithm was SHA-256, using Sun’s provider.

Block size   read        digest only   read+digest
4096         96.5 MB/s   69.6 MB/s     53.1 MB/s
8192         95.9 MB/s   69.6 MB/s     54.8 MB/s
10000        94.6 MB/s   69.2 MB/s     54.0 MB/s
1048576      97.8 MB/s   72.1 MB/s     34.3 MB/s

The interesting take-away is how much worse large blocks perform when you follow the read-then-process model.

The next step is to change the model and split reading and digesting into separate threads. The new model uses two threads and two queues: the threads exchange a pre-allocated set of byte arrays via an empty-array queue and a filled-array queue, both Java LinkedBlockingQueues. Each thread’s loop is described below, followed by a sketch of the pipeline.

Read Thread:

  1. Pop a buffer from the empty-array queue.
  2. Read into the byte array.
  3. Push the buffer onto the filled-array queue.

Digest Thread:

  1. Pop a buffer from the filled-array queue.
  2. Update the digest with the byte array.
  3. Push the buffer back onto the empty-array queue.
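
A minimal sketch of that pipeline, assuming a five-buffer pool and SHA-256 (the class and method names here are illustrative, not ACE’s actual code):

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelinedDigest {

    // A byte array plus how many bytes of it are valid; -1 signals EOF.
    static final class Block {
        final byte[] data;
        int length;
        Block(int size) { data = new byte[size]; }
    }

    public static byte[] digest(final InputStream in, int blockSize)
            throws Exception {
        final LinkedBlockingQueue<Block> empty = new LinkedBlockingQueue<Block>();
        final LinkedBlockingQueue<Block> filled = new LinkedBlockingQueue<Block>();
        for (int i = 0; i < 5; i++) {
            empty.add(new Block(blockSize)); // pre-allocated buffer pool
        }

        // Read thread: pop an empty buffer, fill it, hand it to the digester.
        Thread reader = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Block b = empty.take();
                        b.length = in.read(b.data);
                        filled.put(b);
                        if (b.length == -1) {
                            return; // EOF marker passed downstream
                        }
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });
        reader.start();

        // Digest (current) thread: pop a filled buffer, update, recycle it.
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        while (true) {
            Block b = filled.take();
            if (b.length == -1) {
                break;
            }
            md.update(b.data, 0, b.length);
            empty.put(b);
        }
        reader.join();
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] d = digest(new FileInputStream(args[0]), 1048576);
        StringBuilder sb = new StringBuilder();
        for (byte x : d) {
            sb.append(String.format("%02x", x & 0xff));
        }
        System.out.println(sb);
    }
}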

For the following test runs, I used five byte buffers of each of the above sizes to see how performance would vary.

Block size   read        digest only   read+digest
4096         92.2 MB/s   65.63 MB/s    54.7 MB/s
8192         92.4 MB/s   67.5 MB/s     57.0 MB/s
10000        87.8 MB/s   68.0 MB/s     57.2 MB/s
1048576      97.4 MB/s   72.2 MB/s     64.7 MB/s

Small-block performance is pretty much unchanged, but for large blocks on large files there is a substantial speedup: read+digest now runs at almost 90% of the possible digest speed, versus 48% before. The next version of ACE will switch to this method of data reading.

ACE 1.6 released

ACE 1.6 has been released. Among its notable features are:

  • Support for exporting token stores.
  • New Maven-based build system, for easier third-party collaboration.
  • Checkm manifest exporting.
  • Improved database storage of tokens.
  • Open Subversion repository and nightly builds.

Download here: ACE Download

A complete list of all improvements and bug fixes is available in the Changelog.

When upgrading to 1.6, ACE will attempt to migrate all existing tokens from
the older table layout to the newer one. This means the first time you run 1.6,
it may take several minutes to perform this migration, and the web interface
will be unavailable while it runs. You can follow the status of the migration
in the ACE log file, located at /tmp/aceam.log.