Roman's Blog on Tech
Sunday, June 9, 2019
Testing an Aspect in a Spring application
Actually, there are two things about an aspect that are interesting from the testing perspective:
- The business logic that the aspect executes when triggered
- The pointcut expression(s) which trigger aspect execution
Even the first of them is not so easy to test, because you need an instance of ProceedingJoinPoint, which is cumbersome to implement or mock (and mocking external interfaces is not recommended, as explained in Growing Object-Oriented Software, Guided by Tests, for example).
The solution
Let's imagine that we have an aspect that must throw an exception if a method's first argument is null, and otherwise allow the method invocation to proceed.
It should only be applied to controllers annotated with our custom @ThrowOnNullFirstArg annotation.
@Aspect
public class ThrowOnNullFirstArgAspect {
    @Pointcut(""
            + "within(@org.springframework.stereotype.Controller *) || "
            + "within(@(@org.springframework.stereotype.Controller *) *)")
    private void isController() {}

    @Around("isController()")
    public Object executeAroundController(ProceedingJoinPoint point) throws Throwable {
        throwIfNullFirstArgIsPassed(point);
        return point.proceed();
    }

    private void throwIfNullFirstArgIsPassed(ProceedingJoinPoint point) {
        if (!(point.getSignature() instanceof MethodSignature)) {
            return;
        }
        if (point.getArgs().length > 0 && point.getArgs()[0] == null) {
            throw new IllegalStateException("The first argument is not allowed to be null");
        }
    }
}
We could test it like so:
public class ThrowOnNullFirstArgAspectTest {
    private final ThrowOnNullFirstArgAspect aspect = new ThrowOnNullFirstArgAspect();

    private TestController controllerProxy;

    @Before
    public void setUp() {
        AspectJProxyFactory aspectJProxyFactory = new AspectJProxyFactory(new TestController());
        aspectJProxyFactory.addAspect(aspect);

        DefaultAopProxyFactory proxyFactory = new DefaultAopProxyFactory();
        AopProxy aopProxy = proxyFactory.createAopProxy(aspectJProxyFactory);

        controllerProxy = (TestController) aopProxy.getProxy();
    }

    @Test
    public void whenInvokingWithNullFirstArg_thenExceptionShouldBeThrown() {
        try {
            controllerProxy.someMethod(null);
            fail("An exception should be thrown");
        } catch (IllegalStateException e) {
            assertThat(e.getMessage(), is("The first argument is not allowed to be null"));
        }
    }

    @Test
    public void whenInvokingWithNonNullFirstArg_thenNothingShouldBeThrown() {
        String result = controllerProxy.someMethod(Descriptor.builder().externalId("id").build());
        assertThat(result, is("ok"));
    }

    @Controller
    @ThrowOnNullFirstArg
    private static class TestController {
        @SuppressWarnings("unused")
        String someMethod(Descriptor descriptor) {
            return "ok";
        }
    }
}
The key part is inside the setUp() method. Note that this approach also lets you verify the correctness of your pointcut expression, so it solves both problems.
Of course, in a real project it is better to extract the proxy-construction code to some helper class to avoid code duplication and make the intentions clearer.
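If you only want to unit-test the business-logic half (the null check) without any Spring machinery, the same guard can even be exercised behind a plain JDK dynamic proxy. This is a hypothetical sketch (the names NullFirstArgProxyDemo, Controller, and ControllerImpl are mine, and it deliberately bypasses Spring and the pointcut, so it does not replace the AspectJProxyFactory-based test above):

```java
import java.lang.reflect.Proxy;

interface Controller {
    String someMethod(String descriptor);
}

class ControllerImpl implements Controller {
    public String someMethod(String descriptor) {
        return "ok";
    }
}

public class NullFirstArgProxyDemo {
    // Wraps a target in a JDK dynamic proxy that throws if the first argument
    // is null, mimicking the aspect's business logic without Spring.
    @SuppressWarnings("unchecked")
    public static <T> T guard(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface},
                (proxy, method, args) -> {
                    if (args != null && args.length > 0 && args[0] == null) {
                        throw new IllegalStateException("The first argument is not allowed to be null");
                    }
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        Controller proxied = guard(new ControllerImpl(), Controller.class);
        System.out.println(proxied.someMethod("id")); // prints "ok"
        try {
            proxied.someMethod(null);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Since IllegalStateException is unchecked, it propagates through the proxy unchanged, which is exactly the behavior the aspect's test asserts.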
Monday, May 1, 2017
Cassandra cqlsh client, OperationTimedOut and request timeouts
It turned out that with the default settings, Cassandra's cqlsh (the command-line client) behaves differently from what I expected. All of a sudden, my script (a sequence of DDL queries run at the beginning of an integration test to prepare the database) failed. The first error was OperationTimedOut, but the following ones were caused by the fact that the first query had not yet finished. For example, in my case the first query was DROP KEYSPACE, while the second was CREATE KEYSPACE with the same name. Of course, it failed, and the following CREATE TABLE queries failed as well.
Why does this happen? Because cqlsh has a client-side request timeout (10 seconds by default, according to the documentation). If your query runs longer than this limit, the client just fails with an OperationTimedOut error message, but the query keeps running on the server.
OK, how do we disable this limit, or at least configure it to be long enough?
Good news: cqlsh in Cassandra 2.1.16 has a --request-timeout command-line parameter, and you can specify the limit there (in seconds). --request-timeout 3600 would be a good start.
Bad news: cqlsh in Cassandra 2.1.12 does NOT have that parameter yet, so this parameter is not that universal.
By the way, the version reported by cqlsh (with the usual --version) is strange. I tried the cqlsh included in the Cassandra distributions 2.1.8, 2.1.12 and 2.1.16, and in all three cases the version was reported as 5.0.1, even though the 2.1.16 cqlsh reports support for --request-timeout (and really supports it) while the other two don't.
But let's return to our limit.
Good news: the ~/.cassandra/cqlshrc file allows you to define this timeout in its [connection] section.
Bad news: the documentation is not accurate. Although it says that the option was added in version 2.1.1 and is called request_timeout, and this is true for 2.1.16, it is NOT true for 2.1.12, where the option must be called client_timeout. Moreover, in 2.1.12, according to this article, you could completely disable the timeout by assigning None. Alas, in 2.1.16 (with request_timeout) this does not work.
It is not possible to (reliably) disable the timeout completely. If you set request_timeout to 0, every request will time out immediately; negative values cause errors. So the only option is to set it to some large value (like the abovementioned 3600 seconds).
So, a more or less universal way to make sure your integration tests don't stumble upon this is to put the following in your ~/.cassandra/cqlshrc:
[connection]
request_timeout = 3600
client_timeout = 3600
BTW, how could a DROP KEYSPACE for a keyspace containing a few empty tables, on a single-node cluster, fail to fit into the default timeout (presumably 10 seconds) on a machine with a decent HDD that was not overloaded? That's a different story...
Friday, March 10, 2017
Peculiarities of @ControllerAdvice in Spring MVC
What is @ControllerAdvice?
The @ControllerAdvice annotation is used to define some attributes for many Spring controllers (@Controller) at a time. For example, it can be used for centralized exception handling (with the @ExceptionHandler annotation). This post concentrates on that use.
Global and specific advices
If a @ControllerAdvice does not have any selectors specified via annotation attributes, it defines a global advice which affects all the controllers, and its @ExceptionHandler will catch all the exceptions thrown from handler methods (and not just those exceptions, see below). In Spring 4.0, the ability to define specific advices was added: @ControllerAdvice now has attributes with which we can define advice selectors. These selectors define the scope of the advice, i.e. the exact set of controllers that will be advised by it.
Global @ExceptionHandler catches 'no man's' exceptions
'No man's' exceptions are exceptions which occur before the handler to process the request is resolved. So, if we don't specify any attributes on the @ControllerAdvice annotation, its @ExceptionHandler method will catch even the HttpRequestMethodNotSupportedException thrown when someone issues a GET request to our POST-only controller.
But if we specify a class, an annotation or an existing basePackage, the advice is no longer global and will not catch 'no man's' exceptions.
basePackages does not work for controllers proxied with Proxy
To be able to intercept controller invocations (for example, to handle exceptions), the advice has to wrap the controller instance in a proxy. There are two proxy creation options:
- If the controller class implements at least one interface, an interface-based proxy is created using the JDK Proxy class.
- If the controller class does not implement any interfaces, CGLIB is used: a proxy class is created at runtime, and this class extends our initial controller class.
But for the interface-based option, Spring has no way to determine the real class of an instance wrapped with Proxy, so it tries to take the package of the Proxy instance. However, proxy.getClass().getPackage() returns null! In Spring 4.0.5 this even causes a NullPointerException. In 4.0.9 the NPE was fixed, but the package still cannot be determined correctly, so basePackages will not work.
To sum up:
- If the controller class implements at least one interface, an interface-based proxy is created using the Proxy class, and the basePackages attribute of @ControllerAdvice DOES NOT work.
- If the controller class does not implement any interfaces, CGLIB is used and the basePackages attribute works.
What can we do?
Use the annotations attribute. First, let's create an annotation, say @ControllerAdvicedByMyAdvice.
Then we annotate with this annotation all the controllers to which we want to apply the advice. And then we annotate the advice:
@ControllerAdvice(annotations = {ControllerAdvicedByMyAdvice.class})
This approach seems more reliable than using the basePackages attribute.
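For completeness, the marker annotation itself can be as simple as the following sketch. The one hard requirement is runtime retention, so that Spring can discover the annotation on controller classes via reflection:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation for controllers that should be covered by our advice.
// RUNTIME retention is required so Spring can see it via reflection;
// TYPE target restricts it to classes (our controllers).
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface ControllerAdvicedByMyAdvice {
}
```

Any controller annotated with @ControllerAdvicedByMyAdvice will then fall into the scope of the advice declared with @ControllerAdvice(annotations = {ControllerAdvicedByMyAdvice.class}).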
Wednesday, November 26, 2014
Clean, safe and concise read-only Wicket Model with Java 8 Lambdas
Wicket framework uses models (IModel implementations) to bind data to components. Let's say you want to display properties of some object. Here is the data class:
public class User implements Serializable {
    private final String name;
    private final int age;

    public User(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }
}
You can do the following to display both its properties using Label components:
public class AbstractReadOnlyModelPanel extends Panel {
    public AbstractReadOnlyModelPanel(String id, IModel<User> model) {
        super(id, model);
        add(new Label("name", new AbstractReadOnlyModel<String>() {
            @Override
            public String getObject() {
                return model.getObject().getName();
            }
        }));
        add(new Label("age", new AbstractReadOnlyModel<Integer>() {
            @Override
            public Integer getObject() {
                return model.getObject().getAge();
            }
        }));
    }
}
Straightforward and type-safe, but not too concise: each label requires six lines of code! Of course, we can reduce this count using some optimized coding conventions and so on, but anonymous classes remain very verbose.
A more economical way (in terms of lines and characters to type and read) is PropertyModel.
public class PropertyModelPanel extends Panel {
    public PropertyModelPanel(String id, IModel<User> model) {
        super(id, model);
        add(new Label("name", PropertyModel.of(model, "name")));
        add(new Label("age", PropertyModel.of(model, "age")));
    }
}
It is way shorter and still pretty intuitive. But it has drawbacks:
- First of all, it is not safe, as the compiler does not check whether a property named "age" exists at all!
- And it uses reflection, which does not make your web application faster. This does not seem critical, but it is still a small drawback.
Luckily, Java 8 introduced lambdas and method references which allow us to create another model implementation. Here it is:
public class GetterModel<E, P> extends AbstractReadOnlyModel<P> {
    private final E entity;
    private final IModel<E> entityModel;
    private final IPropertyGetter<E, P> getter;

    private GetterModel(E entity, IModel<E> entityModel, IPropertyGetter<E, P> getter) {
        this.entity = entity;
        this.entityModel = entityModel;
        this.getter = getter;
    }

    public static <E, P> GetterModel<E, P> ofObject(E entity, IPropertyGetter<E, P> getter) {
        Objects.requireNonNull(entity, "Entity cannot be null");
        Objects.requireNonNull(getter, "Getter cannot be null");
        return new GetterModel<>(entity, null, getter);
    }

    public static <E, P> GetterModel<E, P> ofModel(IModel<E> entityModel, IPropertyGetter<E, P> getter) {
        Objects.requireNonNull(entityModel, "Entity model cannot be null");
        Objects.requireNonNull(getter, "Getter cannot be null");
        return new GetterModel<>(null, entityModel, getter);
    }

    @Override
    public P getObject() {
        return getter.getPropertyValue(getEntity());
    }

    private E getEntity() {
        return entityModel != null ? entityModel.getObject() : entity;
    }
}
... along with its support interface:
public interface IPropertyGetter<E, P> {
    P getPropertyValue(E entity);
}
And here is the same panel example rewritten using the new model class:
public class GetterModelPanel extends Panel {
    public GetterModelPanel(String id, IModel<User> model) {
        super(id, model);
        add(new Label("name", GetterModel.ofModel(model, User::getName)));
        add(new Label("age", GetterModel.ofModel(model, User::getAge)));
    }
}
The code is almost as concise as the PropertyModel version, but it is:
- type-safe: the compiler will check the actual getter type
- better protected from typos, because the compiler checks that the getter actually exists
- fast, as it just uses regular method calls (two per getObject() call in this case) instead of parsing a property expression and using reflection
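The compile-time safety can be seen without any Wicket dependency at all. Here is a minimal, self-contained sketch (the class names GetterDemo and the repeated nested User/IPropertyGetter are mine, duplicated from the post just to keep the example runnable on its own) showing that IPropertyGetter is an ordinary functional interface compatible with method references:

```java
public class GetterDemo {
    // Same shape as the support interface from the post, repeated here
    // so the sketch compiles on its own.
    interface IPropertyGetter<E, P> {
        P getPropertyValue(E entity);
    }

    static class User {
        private final String name;

        User(String name) {
            this.name = name;
        }

        String getName() {
            return name;
        }
    }

    public static void main(String[] args) {
        // A typo such as User::getNane fails to compile,
        // unlike PropertyModel.of(model, "nane"), which fails only at runtime.
        IPropertyGetter<User, String> nameGetter = User::getName;
        System.out.println(nameGetter.getPropertyValue(new User("Alice"))); // prints "Alice"
    }
}
```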
Here are the drawbacks of the described approach in comparison with PropertyModel:
- It is read-only, while PropertyModel also allows writing to the property. It would be easy to add write support via a setter, but that would make the code pretty clumsy, and we would have to be careful not to combine the getter of one property with the setter of another.
- PropertyModel allows referencing nested properties using the dot operator, for instance with the "outerObject.itsProperty.propertyOfProperty" property expression.
But anyway, when you only need read-only models, GetterModel seems to be an interesting alternative to PropertyModel.
And here is a little bonus: this model implementation allows both models and plain data objects to be used as sources. We just need the two factory methods, ofModel() and ofObject(), and we mimic the magical universality of PropertyModel (which accepts both models and POJOs as its first argument) with no magic tricks at all.
Monday, November 24, 2014
XSLT to convert log4j.xml config to logback.xml config
For log4j.xml there doesn't seem to be any conversion tool available, and logback does not understand unconverted log4j.xml files.
So here is an XSLT stylesheet which converts log4j.xml files to the corresponding logback.xml configurations.
And here is an example. We have the following log4j.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="default" class="org.apache.log4j.ConsoleAppender">
        <param name="target" value="System.out"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %t %p [%c] - %m%n"/>
        </layout>
    </appender>

    <appender name="log4jremote" class="org.apache.log4j.net.SocketAppender">
        <param name="RemoteHost" value="10.0.1.10"/>
        <param name="Port" value="4712"/>
        <param name="ReconnectionDelay" value="10000"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="[my-host][%d{ISO8601}]%c{1}%n%m%n"/>
        </layout>
        <filter class="org.apache.log4j.varia.LevelRangeFilter">
            <param name="LevelMin" value="ERROR"/>
            <param name="LevelMax" value="FATAL"/>
        </filter>
    </appender>

    <logger name="com.somepackage">
        <level value="INFO"/>
    </logger>

    <root>
        <level value="INFO"/>
        <appender-ref ref="default"/>
    </root>
</log4j:configuration>
We run the conversion using Xalan:

java -cp xalan.jar:xercesImpl.jar:serializer.jar:xml-apis.jar org.apache.xalan.xslt.Process -IN log4j.xml -XSL log4j-to-logback.xsl -OUT logback.xml
... and get the following logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scanPeriod="10 seconds" scan="true">
    <appender name="default" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>%d %t %p [%c] - %m%n</pattern>
        </encoder>
    </appender>

    <appender name="log4jremote" class="ch.qos.logback.classic.net.SocketAppender">
        <remoteHost>10.0.1.10</remoteHost>
        <port>4712</port>
        <reconnectionDelay>10000</reconnectionDelay>
        <!-- this is NOT needed for this appender, so it is commented out -->
        <!--
        <layout>
            <pattern>[my-host][%d{ISO8601}]%c{1}%n%m%n</pattern>
        </layout>
        -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

    <logger name="com.somepackage" level="INFO"/>

    <root level="INFO">
        <appender-ref ref="default"/>
    </root>
</configuration>
And here is the GitHub repository: https://github.com/rpuch/log4j2logback
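If you'd rather not ship the Xalan jars, the JDK's built-in XSLT 1.0 processor (javax.xml.transform) can also run the stylesheet. Below is a toy, self-contained sketch: the inline stylesheet only renames a <param name="target"> element into logback's <target> form, standing in for the real log4j-to-logback.xsl (the class name XsltDemo and the tiny stylesheet are mine, for illustration only):

```java
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class XsltDemo {
    // Runs a toy stylesheet with the JDK's built-in XSLT 1.0 processor.
    // In a real conversion you would pass log4j.xml and log4j-to-logback.xsl
    // as file-based StreamSources instead of inline strings.
    static String transform() throws Exception {
        String xsl =
                "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
                + "<xsl:template match='param[@name=\"target\"]'>"
                + "<target><xsl:value-of select='@value'/></target>"
                + "</xsl:template>"
                + "</xsl:stylesheet>";
        String input = "<appender><param name='target' value='System.out'/></appender>";

        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
        transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(input)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform());
    }
}
```

The built-in template rules recurse through elements that have no matching template, so only the renamed <target> element appears in the output of this toy example.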
Friday, October 3, 2014
Spring Security 3.2+ defaults break Wicket Ajax-based file uploads
The following error appeared in the browser console:

Refused to display 'http://localhost:8084/paynet-ui/L7ExSNbPC4sb6TPJDblCAkN0baRJxw3q6-_dANoYsTD…QK61FV9bCONpyleIKW61suSWRondDQjTs8tjqJJOpCEaXXCL_A%2FL7E59%2FTs858%2F9QS3a' in a frame because it set 'X-Frame-Options' to 'DENY'.

That seemed strange, because X-Frame-Options relates to frames, which we didn't use explicitly. But when a file upload is made using Ajax, Wicket carries it out using an implicit frame.
Spring Security started adding this header in version 3.2, so it was actually the upgrade to Spring Security 3.2 that broke file uploads. To sort this out, it was sufficient to change the X-Frame-Options value from DENY to SAMEORIGIN using the following snippet in the web security configuration (created using the @Configuration-based approach):
http
    .headers()
        .contentTypeOptions()
        .xssProtection()
        .cacheControl()
        .httpStrictTransportSecurity()
        .addHeaderWriter(new XFrameOptionsHeaderWriter(XFrameOptionsHeaderWriter.XFrameOptionsMode.SAMEORIGIN));

File uploads work now, the quest is finished.