As you probably have seen, we have just announced the GA release of the Spring Data release train Ingalls. As the release is packed with way too many features to cover in a release announcement, I would like to use this post to take a deeper look at the changes and features that come with the 15 modules on the train.
A very fundamental change in the release train’s dependencies is the upgrade to Spring Framework 4.3 (currently 4.3.6) as the baseline. Other dependency upgrades are mostly driven by major version bumps of the underlying store drivers and implementations that need to be reflected in potential breaking changes to the API exposed by those modules.
Ingalls also ships with a new Spring Data module: Spring Data LDAP. The Spring LDAP project has shipped Spring Data repository support for quite a while. After a couple of glitches and incompatibilities, we decided to move the LDAP repository support into a separate Spring Data module so that it is more closely aligned with the release train.
Another big change to the module setup is that Spring Data for Apache Cassandra has now become a core module, which means it is now maintained by the Spring Data team at Pivotal. This is a great opportunity to thank the previous core maintainers, David Webb and Matthew T. Adams, for all their efforts.
Besides those very fundamental changes, the team has been working on a whole bunch of new features:
Use of method handles for property access in conversion subsystem.
Support for XML and JSON based projections for REST payloads (Commons)
Cross-origin resource sharing with Spring Data REST
More MongoDB Aggregation Framework operators for array, arithmetic, date and set operations.
Support for Redis Geo commands.
Upgrade to Cassandra 3.0 with support for query derivation in repository query methods, User-defined types, Java 8 types (Optional, Stream), JSR-310 and ThreeTen Backport.
Support for Javaslang’s Option, collection and map types for repository query methods.
These are the ones that I would like to discuss in the remainder of this post.
A major theme of our release train was to improve the performance in how our object-to-store mapping subsystem accesses data from domain classes. Traditionally, Spring Data has used reflection for that, either inspecting the fields directly or invoking accessor methods of properties.
Although the performance of reflection has significantly improved in Java 8, there is still a different approach that brings performance close to native field access: MethodHandles. They are especially fast to invoke if they are held in static fields of a class, which poses a bit of a challenge for us as we do not know the structure of the domain types you want to persist beforehand. However, we already apply a similar kind of optimization to the creation of domain object instances by using ASM to generate tailor-made factories that invoke constructors directly. We now went ahead and applied the same idea to our PersistentPropertyAccessor implementations: we inspect the types and use ASM to generate a class holding static final MethodHandles that our property reading and writing API then uses to avoid reflection. In case the classes expose a public API (e.g. accessors), we just use those.
In case you are interested, the implementation code can be found here. However, brace yourselves: ASM code might feel a bit complicated to read. All Spring Data modules using the object-to-store mapping (i.e. all except JPA) benefit from this change if you are running at least Java 7. You can find more details in the ticket requesting that change. We have seen performance improvements of 20 to 70%.
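To illustrate the underlying idea, here is a hand-rolled sketch (not Spring Data's generated code) of property access through a MethodHandle held in a static final field; the Person type and its firstname field are made up for this example:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

public class MethodHandleAccess {

    static class Person {
        String firstname;
        Person(String firstname) { this.firstname = firstname; }
    }

    // Held in a static final field, the JIT can treat the handle as a
    // constant and inline invocations almost like a direct field read.
    static final MethodHandle FIRSTNAME_GETTER;

    static {
        try {
            FIRSTNAME_GETTER = MethodHandles.lookup()
                .findGetter(Person.class, "firstname", String.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static String readFirstname(Person person) {
        try {
            // invokeExact avoids boxing and argument adaptation entirely.
            return (String) FIRSTNAME_GETTER.invokeExact(person);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(readFirstname(new Person("Dave"))); // prints "Dave"
    }
}
```

Spring Data cannot write such static fields by hand for your arbitrary domain types, which is why it ASM-generates an accessor class per type at runtime instead.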
Spring application events are usually used to publish technical events within an application. However, they are also a great tool to decouple parts of a system by using that infrastructure for domain events. This is usually implemented like this:
class OrderManagement {

  private final ApplicationEventPublisher publisher;
  private final OrderRepository orders;

  @Transactional
  void completeOrder(Order order) {

    OrderCompletedEvent event = order.complete();

    orders.save(order);
    publisher.publishEvent(event);
  }
}
See how the aggregate root produces an event which a service component then publishes via Spring’s ApplicationEventPublisher. The pattern is a nice one in general but involves quite a bit of ceremony and introduces a technical framework dependency in a business component, which one might like to avoid.
With the Spring Data Ingalls release train, repositories now inspect the aggregates handed to save(…) methods for methods annotated with @DomainEvents, invoke those methods and automatically publish the returned objects via the event publisher. So assuming an Order.complete() implementation looking something like this (AbstractAggregateRoot is a Spring Data provided type containing the annotated methods):
class Order extends AbstractAggregateRoot {

  Order complete() {
    register(new OrderCompletedEvent(this));
    return this;
  }
}
the client code can be simplified to
class OrderManagement {

  private final OrderRepository orders;

  @Transactional
  void completeOrder(Order order) {
    orders.save(order.complete());
  }
}
As you can see, there are no references to Spring infrastructure anymore. The event publication is taken care of by the component responsible for it: the aggregate root. Read more about that new mechanism in the reference documentation. There are more advanced ideas regarding domain events currently circulating within the team. Watch this space for further updates.
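To make the mechanism tangible, here is a simplified, hand-rolled sketch of what a repository conceptually does on save(…). The locally declared @DomainEvents annotation, the Order type and the string event are stand-ins for this example; Spring Data's actual implementation differs in detail:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.function.Consumer;

public class DomainEventsSketch {

    // Stand-in for Spring Data's org.springframework.data.domain.DomainEvents.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface DomainEvents {}

    static class Order {
        private final List<Object> events = new ArrayList<>();

        Order complete() {
            events.add("OrderCompletedEvent"); // a real class in the blog's example
            return this;
        }

        // The repository finds and invokes this method on save(…).
        @DomainEvents
        Collection<Object> domainEvents() { return events; }
    }

    // What save(…) conceptually does before persisting the aggregate:
    // find @DomainEvents methods, invoke them, publish each returned object.
    static void publishDomainEvents(Object aggregate, Consumer<Object> publisher) {
        for (Method method : aggregate.getClass().getDeclaredMethods()) {
            if (method.isAnnotationPresent(DomainEvents.class)) {
                try {
                    method.setAccessible(true);
                    Object result = method.invoke(aggregate);
                    if (result instanceof Collection) {
                        ((Collection<?>) result).forEach(publisher);
                    }
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    public static void main(String[] args) {
        List<Object> published = new ArrayList<>();
        publishDomainEvents(new Order().complete(), published::add);
        System.out.println(published); // prints "[OrderCompletedEvent]"
    }
}
```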
Pagination queries with Spring Data MongoDB and Spring Data JPA now benefit from an improved fetching strategy that more aggressively tries to avoid executing a count query. Constructing a Page requires the fetched data and usually the total record count returned by the query. While data queries can be optimized with range selection and indexes, count queries are quite expensive because they require a scan of the table or an index. In case you request the last, only partially filled page, we can skip counting the records as the total number of elements can be calculated from the offset and the number of items in the result page.
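The arithmetic behind that shortcut can be sketched as follows; this is an illustrative helper, not Spring Data's actual code:

```java
public class PageTotals {

    /**
     * Derives the total number of elements without a count query when the
     * fetched page is the last, partially filled one. A partially filled
     * page cannot be followed by further results, so the total is simply
     * the offset plus the number of elements fetched. Returns -1 when a
     * separate count query is still required.
     */
    static long totalWithoutCount(long offset, int pageSize, int fetchedElements) {
        if (fetchedElements > 0 && fetchedElements < pageSize) {
            return offset + fetchedElements;
        }
        return -1; // full page or empty result: total is unknown without counting
    }

    public static void main(String[] args) {
        // Page 3 of size 10 (offset 20) returned only 3 items: total is 23.
        System.out.println(totalWithoutCount(20, 10, 3));  // prints "23"
        // A completely filled page tells us nothing about the total.
        System.out.println(totalWithoutCount(0, 10, 10));  // prints "-1"
    }
}
```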
Another performance-related change was made in Spring Data MongoDB’s DBRef fetching. Collections of references are now fetched in a single bulk operation if the references in the collection point to the same database collection. That means we can basically read the related collection using a single query instead of one for each element.
The Evans and Hopper release trains shipped with projection features that allow customizing the view on existing domain objects by applying projection interfaces. Projections can be used in application code (repositories or manually implemented Spring MVC controllers) or with Spring Data REST to expose a dedicated view on a domain object through a web endpoint. Projections could also be used to bind form submissions (see this example for details). With Ingalls, we now extend that support to handle JSON and XML requests:
@RestController
class UserController {

  /**
   * Receives POST requests supporting both JSON and XML.
   */
  @PostMapping(value = "/")
  HttpEntity<String> post(@RequestBody UserPayload user) {

    return ResponseEntity
      .ok(String.format("firstname: %s, lastname: %s",
        user.getFirstname(), user.getLastname()));
  }
}
@ProjectedPayload
public interface UserPayload {

  @XBRead("//firstname")
  @JsonPath("$..firstname")
  String getFirstname();

  @XBRead("//lastname")
  @JsonPath("$..lastname")
  String getLastname();
}
Projection interfaces are annotated with @ProjectedPayload to enable projection, and the projection methods are annotated with a JSON Path or XPath expression. If these annotations are omitted, we assume defaults (i.e. $.firstname or /firstname etc. in the example above). The fundamental idea here is to point exactly to the parts of a payload that you are interested in, instead of using an object structure to map incoming data. The use of JSON Path or XPath expressions allows you to be more lenient about the actual location of the element you want to access, so that a change in the payload structure does not necessarily break the consumer. See how the example above looks up firstname anywhere in the document. If the party producing the JSON all of a sudden decided to nest that into e.g. a user document or an XML sub node, nothing would need to change in the consuming code.
If you want to use that kind of payload access on the client, you can simply register the corresponding HttpMessageConverter instances on a RestTemplate:
@Configuration
class Config {

  @Bean
  RestTemplateBuilder builder() {
    return new RestTemplateBuilder()
      .additionalMessageConverters(new ProjectingJackson2HttpMessageConverter())
      .additionalMessageConverters(new XmlBeamHttpMessageConverter());
  }
}
The projection binding support uses JsonPath to evaluate JSON Path expressions and XMLBeam to evaluate XPath expressions. You can find a complete example for this in the Spring Data Examples repository.
Using client-side JavaScript requests inside browsers is restricted by the same-origin policy. Requesting data from sources other than the application server is forbidden by default because it is a cross-origin request. Enabling Cross-Origin Resource Sharing (CORS) requires the target server to provide CORS headers that are sent with every HTTP response. The Ingalls release of Spring Data REST now allows you to enable CORS easily:
@CrossOrigin
public interface CustomerRepository extends CrudRepository<Customer, Long> {}
GET /customers/1 HTTP/1.1
Origin: http://localhost
HTTP/1.1 200 OK
Vary: Origin
ETag: "0"
Access-Control-Allow-Origin: http://localhost
Access-Control-Allow-Credentials: true
Last-Modified: Tue, 24 Jan 2017 09:38:01 GMT
Content-Type: application/hal+json;charset=UTF-8
Exported domain classes and repositories can be annotated with @CrossOrigin to enable CORS, and the annotation can be used to customize the setup. For more global configuration, you can use RepositoryRestConfigurer.configureRepositoryRestConfiguration(…) to gain full control over the CORS setup across all resources exposed by Spring Data REST.
@Component
public class SpringDataRestCustomization extends RepositoryRestConfigurerAdapter {

  @Override
  public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {

    config.getCorsRegistry().addCorsMapping("/person/**")
      .allowedOrigins("http://domain2.com")
      .allowedMethods("PUT", "DELETE")
      .allowedHeaders("header1", "header2", "header3")
      .exposedHeaders("header1", "header2")
      .allowCredentials(false).maxAge(3600);
  }
}
Find more details about that in the reference documentation.
The MongoDB team adds new aggregation framework operators on a regular basis. With the Ingalls release train, we took the chance to enhance Spring Data MongoDB’s set of available operators to align with MongoDB’s and to improve how you interact with them. This release adds native support for the following aggregation operators and aggregation stages:
$anyElementTrue, $allElementsTrue, $setEquals, $setIntersection, $setUnion, $setDifference, $setIsSubset
$filter, $in, $indexOfArray, $range, $reverseArray, $reduce, $zip
$indexOfBytes, $indexOfCP, $split, $strLenBytes, $strLenCP, $substrCP
$stdDevPop, $stdDevSamp
$abs, $ceil, $exp, $floor, $ln, $log, $log10, $pow, $sqrt, $trunc
$arrayElemAt, $concatArrays, $isArray
$literal, $let
$dayOfYear, $dayOfMonth, $dayOfWeek, $year, $month, $week, $hour, $minute, $second, $millisecond, $dateToString, $isoDayOfWeek, $isoWeek, $isoWeekYear
$count, $cond, $ifNull, $map, $switch, $type
$facet, $bucket, $bucketAuto
$replaceRoot, $unwind, $graphLookup
Aggregation operators have entry points for creation and are built in a fluent style. Multiple operators are grouped in facades like ArrayOperators, ArithmeticOperators and many more. Field references and aggregation expressions can be used in the entry point methods. Entry points to aggregation stage operators are accessible via Aggregation.
Aggregation.newAggregation(
project()
.and(ArrayOperators.arrayOf("instock").concat("ordered")).as("items")
);
Aggregation.newAggregation(
project()
.and(ArithmeticOperators.valueOf("quizzes").sum()).as("quizTotal")
);
Aggregation.newAggregation(
group().stdDevSamp("age").as("ageStdDev")
);
Aggregation.newAggregation(Employee.class,
match(Criteria.where("name").is("Andrew")),
graphLookup("employee")
.startWith("reportsTo")
.connectFrom("reportsTo")
.connectTo("name")
.depthField("depth")
.maxDepth(5)
.as("reportingHierarchy"));
Aggregation.newAggregation(bucketAuto("field", 5)
.andOutputExpression("netPrice + tax").as("total")
);
Any currently unsupported aggregation operators and expressions can be used by implementing AggregationOperation or AggregationExpression, respectively. Please also note that some of these operators were introduced in very recent MongoDB versions and can only be used with those.
The growing number of operators opens up a whole new set of possibilities to combine them with each other. Operators can be nested in various combinations, which can sometimes lead to code that is hard to read.
newAggregation(
  project()
    .and(ConditionalOperators.when(Criteria.where("a").gte(42))
      .then("answer")
      .otherwise("no-answer"))
    .as("deep-thought")
);
To simplify this code we now support Spring Expression Language (SpEL) expressions to formulate the same projection like this:
newAggregation(
  project()
    .andExpression("cond(a >= 42, 'answer', 'no-answer')")
    .as("deep-thought")
);
SpEL support in aggregations is not something entirely new. In fact, it has been available since Spring Data MongoDB 1.6; so far it supported arithmetic operations (like '$items.price' * '$items.quantity'). The new bit that Ingalls adds here is that aggregation operators can now be expressed as functions that accept parameters. You pass fields to aggregation operators by using their field names. The aggregation framework then evaluates the SpEL expressions and creates the BSON documents for the aggregation operators.
The gateway to SpEL is AggregationSpELExpression.expressionOf(…), which allows handing in SpEL expressions everywhere you can hand in an AggregationExpression.
newAggregation(
  group("number")
    .first(expressionOf("cond(a >= 42, 'answer', 'no-answer')"))
    .as("deep-thought")
)
Refer to the reference documentation or the MongoDB Aggregation Framework example for further details.
Redis 3.2 supports geo indexes, and we received great support from our community regarding them. Ingalls ships geo index support that is available through RedisTemplate and Redis repositories. Let us have a look at an example:
geoOperations.geoAdd("Sicily", new Point(13.361389, 38.115556), "Agrigento");
geoOperations.geoAdd("Sicily", new Point(15.087269, 37.502669), "Catania");
geoOperations.geoAdd("Sicily", new Point(13.583333, 37.316667), "Palermo");

GeoResults<GeoLocation<String>> result =
  geoOperations.geoRadiusByMember("Sicily", "Palermo",
    new Distance(100, DistanceUnit.KILOMETERS));

List<String> geohashes = geoOperations.geoHash("Sicily", "Agrigento", "Catania");
List<Point> points = geoOperations.geoPos("Sicily", "Agrigento", "Palermo");
Geo indexes integrate seamlessly with your domain classes. Domain objects with geospatial values can be indexed in a geo index and queried through Redis repositories. The following example shows the domain class and repository interface declarations:
public class City {
@Id String id;
String name;
@GeoIndexed Point location;
}
public interface CityRepository extends Repository<City, String> {
List<City> findByLocationNear(Point point, Distance distance);
}
Declaring a repository query method using the Near or Within keywords lets you run geospatial queries near a Point or within a Circle. Note that the @GeoIndexed annotation on location creates a geo index that the derived geospatial query method can use.
Spring Data for Apache Cassandra is now a core module maintained by the Spring Data team. Besides the change in primary ownership of development effort, the Ingalls release train ships with a series of noteworthy changes to the module itself.
We upgraded to the DataStax Java Driver 3.1, and Spring Data for Apache Cassandra now supports Apache Cassandra 3.0 (versions 1.2, 2.0, 2.1, 2.2 and 3.0, up to 3.9).
This release also ships with support for query derivation so that you do not necessarily have to use string queries but can derive an Apache Cassandra CQL query from the query method name:
public interface BasicUserRepository extends Repository<User, Long> {
/**
* Derived query method.
* Creates {@code SELECT * FROM users WHERE username = ?0}.
*/
User findUserByUsername(String username);
/**
* Derived query method using SASI (SSTable Attached Secondary Index)
* features through the {@code LIKE} keyword.
* This query corresponds with
* {@code SELECT * FROM users WHERE lastname LIKE '?0'}.
* {@link User#lastname} is not part of the
* primary key so it requires a secondary index.
*/
List<User> findUsersByLastnameStartsWith(String lastnamePrefix);
}
You can find examples for query derivation for Spring Data for Apache Cassandra in our examples repository.
Query derivation supports all predicates provided by Apache Cassandra and ships with predicates for SASI (SSTable Attached Secondary Index) indexes. In this context, query derivation is not opinionated about primary keys or columns with a secondary index. There is no support for AllowFiltering yet. Repository query methods also support Stream as a return type. Using a Stream does not preload the whole result set but iterates over the results as you pull on the stream.
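That pull-based behavior can be illustrated with a plain Java stream; this generic sketch is not tied to the Cassandra driver, but shows how elements are produced only as the consumer pulls them, analogous to a repository Stream fetching rows on demand:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyStreamDemo {

    public static void main(String[] args) {
        // Counts how many "rows" the (infinite) source actually produced.
        AtomicInteger fetched = new AtomicInteger();

        // Streams should be closed after use, just like a repository Stream
        // backed by an open database cursor.
        try (Stream<Integer> rows = Stream.iterate(1, i -> i + 1)
                .peek(i -> fetched.incrementAndGet())) {
            // Only the three elements the consumer pulls are materialized.
            rows.limit(3).forEach(System.out::println);
        }

        System.out.println("fetched = " + fetched.get()); // prints "fetched = 3"
    }
}
```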
To round things off, you can now use JSR-310 and ThreeTen Backport types as well as Joda-Time types in your domain classes; this support was added as part of the Java 8 support story. JSR-310 types are converted to native Apache Cassandra data types. Refer to the revised reference documentation or our Java 8 examples for details.
public class Order {
@Id String id;
LocalDate orderDate;
ZoneId zoneId;
}
public interface OrderRepository extends Repository<Order, String> {
/**
* Method parameters are converted according the registered
* Converters into Cassandra types.
*/
@Query("SELECT * from pizza_orders WHERE orderdate = ?0 and zoneid = ?1 ALLOW FILTERING")
Order findOrderByOrderDateAndZoneId(LocalDate orderDate, ZoneId zoneId);
/**
* String-based query using native data types.
*/
@Query("SELECT * from pizza_orders WHERE orderdate = ?0 and zoneid = ?1 ALLOW FILTERING")
Order findOrderByDate(com.datastax.driver.core.LocalDate orderDate, String zoneId);
/**
* Streaming query.
*/
Stream<Order> findAll();
}
Data type support is configurable by registering custom conversions. For details on this, make sure you check out the examples dedicated to this on GitHub.
A last noteworthy feature is user-defined types (UDT). With Ingalls, you can now either use mapped user-defined types embedded in your domain classes or just use the native UDTValue type.
@Table
public class Person {
@Id int id;
String firstname, lastname;
Address current;
List<Address> previous;
@CassandraType(type = Name.UDT, userTypeName = "address")
UDTValue alternative;
}
@UserDefinedType
public class Address {
String street, zip, city;
}
Explicitly mapped user-defined types map structured values to UDTValue under the covers so that you can keep working with a domain class while the mapping is handled by Spring Data for Apache Cassandra.
UDT values are stored within a row, which makes mapped UDTs embedded objects. You can use UDTs as a singular property or as part of a collection type. If you are using schema creation, user-defined types are created in the data store on application startup. UDTs are conceptually value objects, which means that updates to UDT values (by saving a domain object) result in replacing the entire value.
For details on particular features please refer to the revised reference documentation or the UDT example.
Spring Data repositories now support Javaslang’s Option and collection types as return types for repository query methods. Option can be used as an alternative to JDK 8’s Optional, and Seq can be used as an alternative to the JDK’s List. Javaslang’s Set and Map are supported, too, and are transparently mapped from their JDK counterparts.
public interface PersonRepository extends Repository<Person, Long> {
Option<Person> findById(Long id);
Seq<Person> findByFirstnameContaining(String firstname);
}
For more information see the JPA with Javaslang example.
The Spring LDAP project has shipped support for Spring Data repositories itself for quite a while. With Ingalls, we have extracted that support into a Spring Data module, so that changes that we make to our internal SPIs propagate to the LDAP based implementation more quickly.
If you are an existing user of Spring LDAP repositories, you are affected by this change and need to apply two changes to your project:
Add Spring Data LDAP to your project dependencies.
Change the packages of the repository components from org.springframework.ldap.repository to org.springframework.data.ldap.repository.
That said, Spring LDAP 2.3.0 already removed its repository support and if you follow the steps above you can continue using LDAP repositories with Spring Data LDAP 1.0. Learn more about LDAP repositories by taking a look at our Spring Data LDAP example.
I hope I could give you a quick overview about the new features of the Ingalls release train. We’re looking forward to your feedback via our Gitter channel. Also, please go ahead and report any bugs you spot in our JIRA. Happy coding!