Monday, February 11, 2013

"Spring Security 3.1" from Packt

It's almost 3 years since I reviewed "Spring Security 3" from Packt Publishing. How time flies!

Now they have just published the updated version, Spring Security 3.1, and asked me for a review again. So I devoted some time to it and browsed through the new version of the book.

This time, I used the e-book version (in EPUB format). Even though it is readable in this format, the source code samples got wrapped in many places, making them more difficult to understand, and the diagrams were partially clipped - well, at least on my reader. Obviously, this is a problem with many technical books: reading them on a Kindle-like device is less comfortable. Fortunately, having access to the e-book, you can also open the PDF version to check things like source code on a bigger screen.

Generally, the book is a continuation of the previous edition, which I reviewed in the past. It has been seriously "refactored", but you can easily spot the core taken from the first edition. The large number of changes comes, I think, from the fact that the primary author has changed: the book is now signed by Robert Winch, the official lead of the Spring Security project.

The final effect is really good - the first edition was already good, and the second is even better. I think the rearrangement of some chapters, and the rewriting of others, made them easier to understand. In my review of the first edition I also listed some things which could be improved - and I noticed that some of them were indeed addressed in the second edition, while others were not. For example, I still think it wouldn't hurt if some attacks, like XSS, were briefly explained when they are mentioned for the first time, instead of only providing a link to OWASP - in a book of more than 300 pages, adding one paragraph describing a given attack would cost nothing, and these are the kind of things you'd rather know too much about than too little. However, even if there are some wrinkles left to be ironed out, I must say I haven't found any serious issues with this book.

So, overall, it's a really good guide to Spring Security. If you're planning to start using this framework, it's definitely worth reading!

Tuesday, August 17, 2010

"Spring Security 3" by Packt Publishing - review

I've just finished reading Spring Security 3, and I can honestly recommend it to any Spring Security user. Peter Mularien's book is well written and easy to follow. I can't say it's the best Spring Security book on the market - simply because it is the only one... so there is no comparison to make. But even without that comparison, it is highly recommended and worth its price.


The book is especially valuable because the Spring Security (SS for short) documentation is not as user-friendly as, for example, the Spring Framework documentation. The Spring Framework has one of the best sets of documentation (I mean the reference manual here) of any open source project I have ever seen. Spring Security also has such a manual, which is not that bad and is definitely a good starting point (when I started with SS, there was no other point of reference, so it had to suffice) - but it can't be compared to the Spring Framework documentation. Because of this, any additional high-quality material on SS is valuable, and Mularien's book is definitely such material.


The book covers all the important topics related to Spring Security: authentication and authorization mechanisms, namespace and bean-based configuration, additional services (e.g. remember-me, session management, custom filters), advanced topics like ACLs, and integration features (OpenID, CAS, etc.). A really broad spectrum of SS applications is covered. I really like the approach of presenting the logic flow and class relationships on diagrams - in the case of such a complicated beast as Spring Security, they are really necessary to get a clear picture. I also really liked the fact that the author points the reader to the places in the code or Javadocs where additional or more comprehensive information can be found - for example, he adds a note: "Methods and pseudo-properties for SpEL access expressions are declared by the public methods provided by the WebSecurityExpressionRoot class, and its superclasses" - a very practical pointer to the right place (in Spring Security it is often not easy to find the exact class whose Javadoc contains the relevant information - that's why such pointers are valuable). The book also warns about some peculiarities of SS naming (e.g. the interface Authentication is implemented by classes named XxxAuthenticationToken - really strange and not intuitive), so it's good to be warned; it's then easier to remember such facts and not get lost.


To make this review fair, I also have to point out some shortcomings.


First - some things which are not directly related to Spring Security, and because of this don't have to be described in this book, but are so closely related to the topic that I believe they deserve a bit more focus. For example, the author mentions some typical security flaws and attacks, but doesn't describe them (he only points to external resources). I understand it's a book about Spring Security, not about general web security. But in such a context, adding three more pages describing a few of the most important threats, mistakes and attacks (SQL injection, XSS, CSRF) would be nice, I think. The book will surely also be read by beginners who may not be aware of those attacks. Two other small things are missing as well. The book describes hypothetical security audit results: the audit identified that user passwords were stored in clear text in the database - and then we read how to fix it. However, it doesn't mention another typical security problem: storing the database password in clear text in the Spring Security configuration file (e.g. if you use a standalone DBCP bean) - which would probably be flagged by such an audit too. Admittedly, neither Spring Security nor the Spring Framework, unfortunately, has any answer to this, and that's probably why it is not in the book. Still, if we are talking about securing a web application, I expected this problem to be at least mentioned. And, similarly, it would be nice to have some information about protecting against brute-force attacks on the passwords in the database - again, Spring Security has no built-in tools for dealing with this, but it is worth mentioning at least (for example, how Spring Security events could be used to try to detect such attacks).
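
To show what I have in mind with the events idea, here is a very rough sketch of my own (not something from the book): a listener that counts failed logins per username, assuming Spring 3's typed ApplicationListener and Spring Security 3's event classes. The class name and the threshold are, of course, invented.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.context.ApplicationListener;
import org.springframework.security.authentication.event.AuthenticationFailureBadCredentialsEvent;

public class FailedLoginListener
        implements ApplicationListener<AuthenticationFailureBadCredentialsEvent> {

    // failed attempts per username, kept in memory - just an illustration
    private final ConcurrentMap<String, AtomicInteger> failures =
            new ConcurrentHashMap<String, AtomicInteger>();

    public void onApplicationEvent(AuthenticationFailureBadCredentialsEvent event) {
        String username = event.getAuthentication().getName();
        failures.putIfAbsent(username, new AtomicInteger(0));
        if (failures.get(username).incrementAndGet() > 5) {
            // looks like a brute-force attempt: log it, lock the account, alert an admin...
        }
    }
}

Registered as a bean, it would receive the events published during authentication; obviously a real implementation would also need some expiry of the counters.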


For me, the biggest omission in the book is the lack of any description of using UserDetails with mutable objects, such as JPA @Entities. This used to be one of the most unclear points in Spring Security - in the past the documentation recommended using immutable objects as UserDetails implementations, while most applications used some sort of ORM, like Hibernate, with mutable entities. The documentation has actually been fixed - now the UserDetails Javadoc clearly says that immutability is not required. However, taking into account that this is such a common setup, that it used to be unclear in the context of UserDetails, and that it caused many questions on the SS forums, I expected to find an example of such a setup in this book - with information on how it influences user caches etc. (UserCache is not mentioned in the book at all, if I remember correctly).
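
Just to make it concrete, this is the kind of setup I mean - a rough sketch of my own, with invented class and field names, assuming the Spring Security 3.0 interfaces:

import java.util.Arrays;
import java.util.Collection;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.GrantedAuthorityImpl;
import org.springframework.security.core.userdetails.UserDetails;

@Entity
public class Account implements UserDetails {

    @Id @GeneratedValue
    private Long id;

    private String username;
    private String password;
    private boolean enabled;

    public Collection<GrantedAuthority> getAuthorities() {
        // role hard-coded for brevity; normally it would be mapped as well
        return Arrays.<GrantedAuthority>asList(new GrantedAuthorityImpl("ROLE_USER"));
    }
    public String getUsername() { return username; }
    public String getPassword() { return password; }
    public boolean isEnabled() { return enabled; }
    public boolean isAccountNonExpired() { return true; }
    public boolean isAccountNonLocked() { return true; }
    public boolean isCredentialsNonExpired() { return true; }

    // plus the usual setters - and exactly here the questions start:
    // when the entity is modified, any configured UserCache has to be
    // refreshed or evicted, and this is what I missed in the book
}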


A few other points: remember-me services are described very well, but then, when the second type of remember-me is introduced, the author says that, "...something that you may have noticed by now...", basic remember-me tokens will not survive a server restart. I went back to the remember-me service description and tried hard to find out why they won't - but couldn't find any clue. So something is missing here (or I didn't read carefully enough). Next: on page 143, the author says "be aware that it is strongly encouraged to declare AOP rules on interfaces, and not on implementation classes". Well, this is embarrassing. I always thought exactly the opposite. See this quote from the Spring Framework reference docs: "Spring recommends that you only annotate concrete classes (and methods of concrete classes) with the @Transactional annotation, as opposed to annotating interfaces." (and @Transactional is definitely a kind of AOP rule). Next weak point: in the section devoted to session-fixation attack protection, the description of the attack itself is very unclear, and I think it doesn't touch the real problem. The book says that if a hacker steals your session, he can only use it until you log in, because session-fixation protection will change your session identifier then. But if he stole my session before I logged in, he can do it again afterwards, so what is this protection about? For me, the real session-fixation attack is based on sending someone a link with a session ID in it. So in fact the hacker doesn't steal the session; instead he "suggests" a session to you. (Maybe this is not the only possible case, but at least the most popular one.) Then this protection makes sense. But you probably won't figure that out from the book.
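
For the record, the placement recommended by the Spring reference manual looks like this (my own trivial example, not one taken from either book):

import org.springframework.transaction.annotation.Transactional;

interface ProjectService {
    void rename(long projectId, String newName);
}

// The recommendation quoted above: put the annotation on the concrete class,
// so it also works with class-based (CGLIB) proxies, not only with JDK
// interface-based proxies.
@Transactional
public class ProjectServiceImpl implements ProjectService {

    public void rename(long projectId, String newName) {
        // load the project and change its name within the transaction...
    }
}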


All in all, despite those small shortcomings, the book is really good, and highly recommended to everybody who is starting to use Spring Security, or who already knows it but doesn't feel like a Spring Security expert yet (even experts can learn some new things from this book, for example about integration with external authentication systems). Because of those small shortcomings, I probably wouldn't give it the full 5 stars, but 4.5 is a fair grade.

Wednesday, July 21, 2010

"Spring Security 3" published by Packt

Recently (well, actually for more than a year) I haven't had much time for updating this blog - this is because we started our own business and our own project, which is very challenging and consumes 100% of my precious time.


This month, however, I was contacted by Packt Publishing: they have just published a book on Spring Security 3 and asked me if I would like to get a copy in order to write a review. From what I can see (at least on Amazon), it's the very first book on the market dedicated to Spring Security (there are books on Spring containing some chapters about Spring Security, but it looks like there was no book on Spring Security itself). Obviously, I was interested, and just today I found the book in my mailbox. First impressions are very good: for example, a quick scan of the book reveals many UML and pseudo-UML diagrams presenting decision flows and Spring Security class hierarchies - this is great; personally, I believe that learning this framework without such diagrams is hardly possible. The plethora of Spring Security classes and interfaces, and the relations between them, makes them really hard to follow - that's why such diagrams are so important.


So, I'm starting to read the book, and you can expect a new post with the review soon.

Thursday, May 7, 2009

Spring Framework 3.0 M3 and REST enhancements

In my previous blog post, I described how Spring 3.0 Milestone 2 can be applied in a real REST web service project, and I enumerated several limitations and problems. M3 is out now. So what has changed?


M3 contains several enhancements over M2, addressing the points I raised:



  1. Fixed URL mapping with @RequestMapping annotations: type-level and method-level annotations finally work together in a logical manner, allowing one controller per REST resource to be written easily (SPR-5631) - a minimal controller sketch is shown further below. I also noticed that at container startup all the resolved mappings are printed to the log at INFO level - that's nice too.

  2. Trailing-slash-insensitive mapping (SPR-5636)

  3. Input data (body) conversion - easy access to the request body as a controller method parameter, plus a converter for form-encoded data (and other types) (SPR-5409)

  4. A type conversion error for @PathVariable-annotated params now returns code 400, not 500 (SPR-5622)

  5. Custom processing of the object returned from a controller method (SPR-5426)

  6. Header-based request matching (SPR-5690)

  7. Flexible exception handling (SPR-5622 and related)


I haven't yet verified the last three of them in practice, but overall it looks pretty good, and the picture is now much more complete than it was with M2.
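
To show what I mean by one controller per REST resource (point 1 above), here is a minimal sketch - the resource and view names are my own invention, not taken from any Spring sample:

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
@RequestMapping("/projects")
public class ProjectsController {

    // GET /projects - the collection resource
    @RequestMapping(method = RequestMethod.GET)
    public String list(Model model) {
        // put the list of projects into the model...
        return "projects/list";
    }

    // GET /projects/{projectName} - a single resource
    @RequestMapping(value = "/{projectName}", method = RequestMethod.GET)
    public String show(@PathVariable String projectName, Model model) {
        // put the selected project into the model...
        return "projects/show";
    }
}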


Small bugs that I've found:


  • An exception when a method-level mapping starts with a slash (SPR-5631, SPR-5726) - should already be fixed

  • Code 404 is returned when it should be 405 (SPR-4927)


One thing that worries me is this sentence from the M3 documentation: "Available separately is the JacksonJsonView included as part of the Spring JavaScript project." I thought it was supposed to be moved to the core REST packages? As I said earlier, JSON is not JavaScript. It's a data exchange format you can use to let Java talk to .NET, with no JavaScript involved. So having a dependency on the JavaScript project makes no sense.


I really like the idea of FormHttpMessageConverter, which can bind the body data to a MultiValueMap:


@RequestMapping(method = PUT, value = "{projectName}")
public HttpResponse createOrUpdate(@PathVariable String projectName,
                                   @RequestBody MultiValueMap<String, String> newValues)

It solves (at least partially) the problem of the servlet getParameter method used with PUT (see my post on REST in M2, and also SPR-5628). Actually, FormHttpMessageConverter calls the getInputStream method on the ServletRequest, so after it has been called by the framework, an attempt to call getParameter inside the controller method will return null not only for PUT requests but also for POST, because the body has already been parsed into the MultiValueMap (this behavior is valid according to the servlet API). But having the map of parameters, there is no need to call getParameter again.


I have only one problem with MultiValueMap. Previously, I manually parsed the form data from the body into a PropertyValues object, which was then passed to Spring's DataBinder to update the model entity class. Now, in M3, the form data is parsed by Spring and I get a MultiValueMap<String,String> directly. That's nice, but how do I pass it to the DataBinder? There is a bind(Map) method in DataBinder, but the problem is that MultiValueMap<String,String> is, at the very bottom, actually a Map<String,List<String>>. Now, if I pass it to the bind method directly, I get many binding errors saying "cannot convert List<String> to String", because the model class has fields of type String, while the MultiValueMap keeps them as one-element string lists. The simplest solution would be to extend the MultiValueMap interface with one method, called e.g. asFlatMap(), that would return either a copy or a live view of the MultiValueMap<K,V> as a Map<K,Object> with all values that are single-element lists replaced by the element itself (so Object is actually either V or List<V>). This way I could easily pass the map returned by this method to the DataBinder. The code below shows the conversion:


Map<String,Object> asFlatMap(MultiValueMap<String,String> map) {
    Map<String,Object> convertedMap = new LinkedHashMap<String,Object>();
    for (Map.Entry<String,List<String>> entry : map.entrySet()) {
        if (entry.getValue().size() == 1) {
            // unwrap single-element lists to the bare value
            convertedMap.put(entry.getKey(), entry.getValue().get(0));
        } else {
            convertedMap.put(entry.getKey(), entry.getValue());
        }
    }
    return convertedMap;
}
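
The flattened map can then be handed to the binder - MutablePropertyValues (from org.springframework.beans) accepts a plain Map, so something like this should do, where project stands for my model entity and newValues is the @RequestBody parameter:

DataBinder binder = new DataBinder(project);
binder.bind(new MutablePropertyValues(asFlatMap(newValues)));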

I've created issue SPR-5733 for this. AFAIK, Spring's data binding is going to be refactored in RC1, so perhaps this can be aligned with it too.

Saturday, April 25, 2009

NullPointered

As I mentioned in one of my previous posts, I have been outsourced to a project which uses a home-made, rather naive web framework. This framework has something in common with Struts: it has an abstract Action class that must be extended by every action in the system. The main method of this class, which must be implemented by concrete actions, has a signature of this shape:


abstract protected Result perform(...) throws ProcessDataException, NullPointerException;

So, how would you implement the "Hello World!" example in this framework? I guess, more or less this way:


public Result perform(...) throws ProcessDataException, NullPointerException {
    throw new NullPointerException("Hello World!");
}

Well, this looks like a good candidate for The Daily WTF :)

Wednesday, April 15, 2009

Ext Core - a new weapon in your arsenal

Recently, the Ext-JS team announced the introduction of a new product, Ext Core. At the moment it is in beta, but I guess we should see the final version soon. The good news is that Ext Core will have a free MIT license for any use, as opposed to Ext-JS, which is not free for commercial use.


Why is this announcement so important? Well, Ext-JS is an excellent JavaScript library, so if you want to build a really big pure-JavaScript client, Ext-JS is the best choice. However, such clients are still not that popular, and server-side frameworks remain mainstream. So what most developers do is enhance server-side-generated web pages with JavaScript add-ons. Low-level DOM manipulation libraries like Prototype or jQuery are perfect for this purpose. They are lightweight and free, and that's why they got so popular. Obviously, you could use Ext-JS for the same purpose, because it contains low-level tools capable of doing the same thing. But the whole power of Ext-JS, the component-based UI, wouldn't be used at all. So would you pay for Ext-JS for features you won't use, if jQuery can do it for free? Obviously not. This, I believe, was an important barrier to Ext-JS adoption.


Now, from what I understand, beginning with version 3, Ext will be split into two distinct layers (and products). Ext Core will contain the low-level code for dealing with DOM elements, with an API concentrated around the Ext.Element class, while Ext-JS will be the full Ext distribution: the bundled Ext Core plus all the UI widgets, tools and managers, with an API concentrated around Ext.Component.


This means that Ext Core becomes a viable alternative to jQuery: it's also lightweight and free, and it has fantastic community support on the Ext-JS forum (I must say I've never seen another forum where questions get answered so quickly, despite really high traffic). I haven't compared Ext Core and jQuery in terms of their APIs - which library is more powerful and/or simpler. Many concepts are similar, even if they are named or implemented a bit differently. In fact, I haven't used the low-level features of Ext-JS much before, because when you use Ext components, you usually don't need to bother with DOM-level details. Ext Core is at the beta stage, so the API can probably still evolve a bit. (For example, it's strange to me that there is no simple method in Ext.Element to get an attribute value of the element; there is only the namespaced version getAttributeNS(namespace, name) - why is there no getAttribute(name)? How often do you come across namespaced attributes in HTML pages?)


One big advantage of Ext-JS is documentation. I've always found the Ext-JS API documentation clearer and more complete than jQuery's. Ext Core, I hope, will not be worse in this area. At the moment there is API documentation available, but it also looks like a "beta" version, because there is no details section (like in the standard Ext-JS docs), and the method links point directly to the source instead of to the details section. It's a good idea to link to the source, but I shouldn't be forced to go to the source every time I want to check the full method description. I hope this will be fixed before the final release. There is also an Ext Core manual available, which looks very promising, even if it is also not finished yet (there are typos and errors, and some sections are only sketched). One thing is very interesting: the authors of the manual teach you Ext Core with the aid of Firebug - there are plenty of examples based on Firebug. That's really cool. On the other hand, the section describing effects is poor: you see a red rectangle and the code which is supposed to apply some effects to it (fade, switch off, ...), but you can't execute the code and see it in action. Quite annoying. But I hope this will also be fixed before Ext Core goes final.


Being free, lightweight and high-quality, Ext Core should attract many developers. And obviously, in the longer term, those who start with Ext Core and get to know it well will be more eager to take one step further and switch to the "full version", Ext-JS. Things are getting more and more interesting in the JavaScript framework circle. With the new player, I guess we'll soon see some significant changes in the JavaScript library usage statistics.

Thursday, April 9, 2009

Fortunately, XSLT is dead (in web frameworks)

Several years ago there was a trend in Java web frameworks to use XML processing as the foundation of the framework logic: take the data from the database and present it as an XML document (either convert it from relational data or use an XML-capable database), and then transform it to HTML using XSLT. Some books were written about it, and some frameworks were created (like Cocoon). After some time the hype started to decline, and currently virtually no modern web framework applies this approach, I think.


Actually, I've never used it myself, but I liked the idea at the time. It seemed clear and powerful. It's funny that only now do I have a chance to check how it works in practice. And at least now I know why this approach failed.


For two weeks I've been outsourced to help a company with their web project. They use their own home-made web framework for this, which talks to their own home-made document-repository system. Unfortunately, both seem to be buggy and unstable, though they say this is not the first project they have used them in (I can't believe it's true, really). I guess they would be better off using the standard Java Content Repository (JSR-170) to implement the document repository, and some modern web framework instead of their own home-grown one. If they insisted on XML/XSLT transformations, they could use Cocoon. At least there would be more documentation available, and it would be well tested and stable. But OK, it's not the first company that suffers from NIH syndrome, or the guys are simply too lazy or overloaded to look around for other stuff. The interesting point is: how does XML-based processing work in practice? The short answer is: very poorly. And the weakest link in the whole chain is XSLT.


XSLT bloat


I won't go so far as to say that XSLT is worthless - perhaps in some contexts it can be useful, especially for transforming one document tree into another (valid) document tree, i.e. from DOM to DOM. XSLT guarantees that the input and output documents will be valid XML - this is crucial, e.g., in SOA applications transforming one document into other documents to be processed by machines. (X)HTML is a document tree too, at least formally, but from the point of view of the web browser, having perfectly valid XHTML is good but not crucial, and from the point of view of the web designer or developer, the DOM behind it doesn't matter at all, and making the template valid XML is of no importance. For dynamic generation of HTML pages, in most cases it is much easier to treat the HTML code as unformatted text and make a web page template by embedding some special processing directives in that text. This approach was taken by JSP (first with scriptlets, then with JSTL), Velocity, FreeMarker, and other technologies. None of those technologies uses strict XML as the template. On the opposite side we have JSPX (JSP using strict XML), which never caught on - I guess many Java developers have never even met it - and XSLT.


I've used JSP with JSTL a lot. It wasn't perfect, but it worked. Now I have to do the same with XSLT and it's a nightmare. Things that took me half an hour with JSP take several hours in XSLT. This is a list of the things I hate the most in XSLT:



  1. Conditional attributes. For example: how do you hide a row in a table (using a different CSS style) based on some CONDITION with XSLT? See:

    <tr>
      <xsl:attribute name="style">
        <xsl:choose>
          <xsl:when test="CONDITION">
            <xsl:value-of select="'visibility: visible'"/>
          </xsl:when>
          <xsl:otherwise>
            <xsl:value-of select="'visibility: collapse'"/>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:attribute>
      ...
    </tr>

    and now the same with JSP 1.x:

    <tr style='visibility:<%=CONDITION ? "collapse" : "visible"%>'>
    ...
    </tr>

    or with JSP 2.x:

    <tr style='visibility:${CONDITION ? "collapse" : "visible"}'>
    ...
    </tr>


  2. Nested loops. In JSTL the <c:forEach> tag has a var attribute - the variable that gets assigned the current element of the collection during iteration. In nested loops you choose different var names, and you have easy access to the variable at any level. The similar <xsl:for-each> in XSLT has no var attribute; you must use additional variables as child nodes, or some other workaround. It's very easy to get lost.

  3. Every XML-like fragment which is not actual XML must be escaped. Say you have an inline JavaScript call which appends a row to a table:
    onclick="append('<tr><td></td><td></td></tr>')"

    This works quite well in JSP, but will blow up in XSLT with a "could not compile stylesheet" message. You must escape each < character:
    onclick="append('&lt;tr>&lt;td>&lt;/td>&lt;td>&lt;/td>&lt;/tr>')"

    Now nobody can understand at first glance what is going on here.

  4. The functional approach applied in XSLT's design, instead of the procedural one well known to all programmers, makes "thinking in XSLT" very hard. The "normal" approach (JSP, Velocity, etc.) takes an HTML template starting with the familiar <html><head>...<body>... and looks for special markers, where it puts data from the "model". This data can be a Java object or XPath-extracted data from some XML. XSLT does it the other way round: it starts with <template match="..."><apply-templates>..., so it takes the XML data document first and tries to manipulate its content to obtain another document. As I said, in SOA processing this is fine. But in HTML generation it looks completely alien. I must say I have always had problems with mentally visualizing this process.

  5. No formal XML schema for XSLT 1.0 exists. At least I couldn't find one - there is only an unofficial DTD somewhere. This ruins IDE code completion. And XSLT is so complicated that you can't simply learn it in one day (or even a week), so some inline help would be of real help.


Now take all those points together and multiply them by all the places on a web page where dynamic content generation happens. Lengthy, complicated, with all that escaping, plus complicated XPath expressions, plus this other-way-round functional approach. That is developer horror. And worse, it is maintainer hell. In the current project, after two weeks I'm not able to understand the sections I wrote a few days ago. I cannot imagine looking at them after several months. What's worse, the template code is so different from the resulting HTML code that navigating the template and finding the actual place I need to edit always takes too much time.


After two weeks I'm fed up with XSLT. It's absolutely unproductive in web frameworks. Now I know why none of the XML-based frameworks ever became really popular. And I know I was completely wrong 3 years ago, when I would have bet that JSPX would replace JSP in a short time. Fortunately, it didn't.


Any alternatives?


Now that we know XSLT is evil, which view technology should we use in our projects? Stick with JSP? For a JSF-based project I wouldn't use JSP for sure, because it simply doesn't play well with JSF; instead I would go for something like Facelets. But actually I wouldn't go for JSF at all any longer (that's another story about a thing I used to believe in and predicted a long life for, and which finally disappointed me completely). So for non-JSF projects there are JSP/JSTL, Velocity, FreeMarker, GSP for Grails projects, SiteMesh for page layouts, and perhaps other technologies I'm not aware of. JSP/JSTL is the most widely used and best known, and has the best tool support, even if it is probably the worst of the group. Take those crazy SQL-based tags in JSTL, or the funny standard JSP tags for dealing with request parameters. Why didn't they just take the JSTL tags that are used all the time (if, forEach, format, ...) and make them part of standard JSP? Why do I always have to include it as a separate library on Tomcat?

Besides, I said earlier that the input template doesn't have to be valid XML, but I must say I don't like constructs like <input value="<c:out ...>">. A tag inside another tag's attribute - this looks horrible. That is why I now think that template directives should not be constructed with XML tags: all those custom tags, JSTL tags etc. are the wrong direction. Such code is simply too hard to read, because it resembles HTML too much (OK, templates are not always used to generate HTML, but I guess in about 95% of cases they are). The better approach is to use some special characters, like the # directives in Velocity, or [# in FreeMarker, or the EL syntax ${...} in JSP/JSTL (but EL is very limited, and it is not really a directive; besides, the assumption that only getters can be called from EL was a serious mistake: e.g. you cannot check a collection's size, because there is no getSize() method, only size()). Compare the if-then-else block created with JSP/JSTL:


<c:choose>
  <c:when test="${expr1}">
    ...
  </c:when>
  <c:when test="${expr2}">
    ...
  </c:when>
  <c:otherwise>
    ...
  </c:otherwise>
</c:choose>

and the same written with Velocity:

#if ($expr1)
...
#elseif ($expr2)
...
#else
...
#end

Which one is easier to read?


FreeMarker can also be a good replacement for existing projects that already use XSLT. From what I see, you can bind the XML data document to a variable and then access it with XPath queries from the FreeMarker template to extract data. Velocity offers a similar thing, called DVSL, but it doesn't look good to me, because it applies the same functional, other-way-round, alien-looking "apply-templates" approach as XSLT.


Velocity and FreeMarker also integrate well with Spring. From my point of view, the only serious drawback of those technologies compared to JSP is IDE support. In the company where I'm doing my painful job with XSLT, "the only IDE" is NetBeans (I think it is not obligatory, but simply all the guys use it, and the projects are not built with an external Ant script or Maven but simply by NetBeans, so it is hard to use any other IDE anyway). I tried to find a plugin with Velocity or FreeMarker support for NetBeans (at least for syntax highlighting), but it looks like there is none. That's really strange to me - those technologies have been on the market for many years, and they are quite popular, I think. So why is there no support for them in the second most popular Java IDE? For Eclipse the situation is better, from what I see.


So if you start a new project, think twice (or ten times) before jumping into XSLT. And if you use Eclipse, you can even think twice before using JSP/JSTL - Velocity or FreeMarker might be a better option.


Footnote: this article was selected for the "Developer's Perspective" column in the Javalobby News newsletter from Apr 21, 2009, and it was also reposted on DZone, where it started a very interesting discussion. If you are interested, look at the comments there: you will also find some good points in favor of XSLT, to complete the picture.