The Apache Pivot project has released version 1.5. Download it here.
Head bashing
When working with SmartClient and trying to get pages to render in IE, you may see a lot of seemingly random "'something' is null or not an object" messages. These are caused by a dangling comma in a list of items: unlike other browsers, IE's JScript engine treats a trailing comma in an array literal as one extra, undefined element. Most times, look at the end of a members or fields array for the little beastie hanging around.
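For example, a grid definition along these lines (a hypothetical sketch; the grid ID and field names are invented for illustration) will work everywhere except IE, which sees a third, undefined entry in the fields array:

isc.ListGrid.create({
    ID: "ordersGrid",
    fields: [
        { name: "orderId", title: "Order" },
        { name: "status", title: "Status" },  // <-- the beastie: IE adds an extra undefined field here
    ]
});

When SmartClient walks that array and touches the undefined entry, IE throws the unhelpful message above.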
J2Weeeeeeeeeee
As Strong Bad would say, for good or for awesome…
@Path("/orders/") @Interceptors(CallAudit.class) @Stateless public class OrderService { @EJB BillingService billing; @EJB DeliveryService delivery; @EJB Warehouse warehouse; @PUT @Produces({"application/xml","application/json"}) @Consumes({"application/xml","application/json"}) public Order order(Order newOrder){ Order order = warehouse.checkout(newOrder); billing.payForOrder(order); delivery.deliver(order); return order; } @GET @Path("{orderid}/") @Produces({"application/xml","application/json"}) public Order status(@PathParam("orderid") long orderId){ return delivery.status(orderId); } }
Full details here
Storage Success
Our web-based storage system was able to completely saturate the gigabit link to a compute cluster. The serving nodes were two Sun x2100s, each with an older Apple XRAID attached, and all access went through the Grizzly-based web server. Separate testing showed that this system could handle between 1200 and 2200 small-file requests/s per node.
Being Evil
Someone was using SiteSucker on a demo site and was behaving badly; the site also hosts our Redmine install on the same DB, and the crawl started to slow down my work. To block it we had a few options: iptables the IP, block the IP within Apache (htaccess), or do the evil thing and just block based on their user-agent. I chose the latter.
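For comparison, the iptables route I passed on would have been a one-liner (using the offender's IP from the logs below):

iptables -A INPUT -s 82.232.60.250 -j DROP

The user-agent version I actually used: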
RewriteEngine on
...
RewriteCond %{HTTP_USER_AGENT} ^SiteSucker2.3.1
RewriteRule ^(.*) /badbot.html
This is evil since the person's crawler will get our badbot.html page. The person will then use Firefox/IE/whatever to browse to the page and look to see why it's not working, but since the browser sends a different user-agent, they will be allowed through. Anyone who knows how to configure a crawler can change the supplied agent without trouble, but then again that person would likely also be able to control their crawler well enough not to kill my web server. Here's a snapshot of the logs showing the crawler pulling badbot.html (size 174) followed by a browse attempt from Safari.
82.232.60.250 - - [09/Mar/2010:19:52:41 -0500] "GET /cgi-bin/isadg/viewitem.pl?item=14379 HTTP/1.1" 200 174 "http://narademo.umiacs.umd.edu/cgi-bin/isadg/viewseries.pl?seriesid=1054" "SiteSucker2.3.1 CFNetwork/438.14 Darwin/9.8.0 (i386) (MacBook1%2C1)"
82.232.60.250 - - [09/Mar/2010:19:52:42 -0500] "GET /cgi-bin/isadg/viewitem.pl?item=14377 HTTP/1.1" 200 174 "http://narademo.umiacs.umd.edu/cgi-bin/isadg/viewseries.pl?seriesid=1054" "SiteSucker2.3.1 CFNetwork/438.14 Darwin/9.8.0 (i386) (MacBook1%2C1)"
82.232.60.250 - - [10/Mar/2010:11:03:13 -0500] "GET /cgi-bin/isadg/viewrg.pl?rgid=44 HTTP/1.1" 200 2286 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; fr-fr) AppleWebKit/531.21.8 (KHTML, like Gecko) Version/4.0.4 Safari/531.21.10"