Posted on May 24, 2023
Spine 1.9.0
Base
Several significant changes have been made in this release to the base modules.
Breaking changes
- Environment-related types were moved under the `io.spine.environment` package.
- The `Production` environment type was renamed to `DefaultMode` (see the sketch after this list).
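To illustrate the rename, here is a minimal sketch of checking the current environment type; it assumes that the `Environment` class moved to `io.spine.environment` along with the other environment-related types:

```java
import io.spine.environment.DefaultMode;
import io.spine.environment.Environment;

final class ModeCheck {

    static boolean inDefaultMode() {
        // `DefaultMode` is the new name of the former `Production` type.
        return Environment.instance().is(DefaultMode.class);
    }
}
```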
Other changes
- It is now possible to check whether the environment is enabled; see `EnvironmentType.enabled()`.
- It is also possible to configure a callback executed upon the environment type detection; see `EnvironmentType.onDetected(...)`.
- The API for the custom environment types was changed; see `io.spine.environment.CustomEnvironmentType` and its documentation for more detail.
- In the `Preconditions2` helper class, the following methods are now annotated with `@CanIgnoreReturnValue` (see the usage sketch after this list):
  - `checkNotEmptyOrBlank(String str, @Nullable Object errorMessage)`
  - `checkNotEmptyOrBlank(String str, @Nullable String errorMessageTemplate, @Nullable Object @Nullable ... errorMessageArgs)`
  - `checkPositive(long value)`
  - `checkPositive(long value, @Nullable String errorMessageTemplate, @Nullable Object @Nullable ... errorMessageArgs)`
- Previously used `jCenter()` references were replaced with `mavenCentral()`, as JCenter no longer provides a public Maven repository.
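A minimal usage sketch for the newly annotated preconditions; it assumes `Preconditions2` resides in `io.spine.util`, and the class and variable names are illustrative:

```java
import static io.spine.util.Preconditions2.checkNotEmptyOrBlank;
import static io.spine.util.Preconditions2.checkPositive;

final class BatchConfig {

    private final String name;
    private final long batchSize;

    BatchConfig(String name, long batchSize) {
        // The checked value may still be used...
        this.name = checkNotEmptyOrBlank(name, "The name must not be empty.");
        // ...or, with `@CanIgnoreReturnValue` in place, the call may be made purely
        // for its validating side effect, without triggering static analysis warnings.
        checkPositive(batchSize);
        this.batchSize = batchSize;
    }
}
```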
See the corresponding pull requests for more detail.
Core Java
This update brings a number of API changes, and also addresses several known issues.
Breaking changes
The API of the `ShardedWorkRegistry` has been changed.

In particular, a new `PickUpOutcome pickUp(ShardIndex index, NodeId node)` method is introduced. Note that it returns an explicit result instead of an `Optional`, as it did previously. This outcome contains one of the two:

- `ShardSessionRecord` — meaning that the shard is picked up successfully;
- `ShardAlreadyPickedUp` — a message that contains the `WorkerId` of the worker who owns the session at the moment, and the `Timestamp` when the shard was picked up. This outcome means the session cannot be obtained, as it is already picked up.

Also, there is a new `void release(ShardSessionRecord session)` method that releases the passed session.
Here is a summary of code changes for those using `ShardedWorkRegistry`:
Before:
```java
Optional<ShardProcessingSession> session = workRegistry.pickUp(index, currentNode);
if (session.isPresent()) { // Check if shard is picked.
    // ...
    session.get().complete(); // Release shard.
}
```
After:
```java
PickUpOutcome outcome = workRegistry.pickUp(index, currentNode);
if (outcome.hasSession()) { // Check if shard is picked.
    // ...
    workRegistry.release(outcome.getSession()); // Release shard.
}
```
Also, the new API allows getting the `WorkerId` of the worker who owns the session, in case the shard is already picked up by someone else, along with the `Timestamp` when the shard was picked up:
```java
PickUpOutcome outcome = workRegistry.pickUp(index, currentNode);
if (outcome.hasAlreadyPicked()) {
    WorkerId worker = outcome.getAlreadyPicked().getWorker();
    Timestamp whenPicked = outcome.getAlreadyPicked().getWhenPicked();
    // ...
}
```
Other changes
- Custom `Executor` for `SystemSettings` (#1448)

  Now, `SystemSettings` allows customizing an `Executor` used to post the system events in parallel. This provides an opportunity to improve control over the available CPU resources on a server instance.

  ```java
  var builder = BoundedContextBuilder.assumingTests();
  var executor = ...;
  builder.systemSettings()
         .enableParallelPosting()
         .useCustomExecutor(executor);
  ```
- Customization of gRPC `Server` via `GrpcContainer` (#1454)

  It is now possible to access the underlying `Server` builder when configuring the `GrpcContainer`:

  ```java
  GrpcContainer.atPort(1654)
      // `server` is an instance of `io.grpc.ServerBuilder`.
      .withServer((server) -> server.maxInboundMessageSize(16_000_000))
      // ...
      .build();
  ```

  This API is experimental and may change in future versions of Spine.
- Thorough copying of Bounded Contexts by `BlackBoxContext`’s builder (#1495)

  Previously, the `BlackBoxContext` instances were built on top of `BoundedContextBuilder`s by copying the internals of the latter builder. However, not all of its parts were copied properly.

  This release improves the copying by including more pieces from the source `BoundedContextBuilder`. In particular, all changes made to `BoundedContextBuilder.systemSettings()` are now transferred as well, as the sketch below illustrates.
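  A minimal sketch, assuming the test context is created via `BlackBoxContext.from(...)`:

  ```java
  var builder = BoundedContextBuilder.assumingTests();
  builder.systemSettings()
         .enableParallelPosting();
  // The system settings configured above are now carried over
  // into the test-only context as well.
  var context = BlackBoxContext.from(builder);
  ```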
- Custom handlers for failed delivery of a signal (#1496)

  Now, the Delivery API allows subscribing to any failures which occur during the reception of each signal. Additionally, end-users may now choose how to handle such reception failures in terms of the action taken upon the `InboxMessage` of interest.

  Out of the box, end-users are provided with two options:

  - mark the `InboxMessage` as delivered — so that it does not block further delivery of messages;
  - repeat the dispatching of the `InboxMessage` in a synchronous manner.

  Alternatively, end-users may implement their own way of handling the reception failure.

  The corresponding functionality is provided via the API of `DeliveryMonitor`:

  ```java
  public final class MarkFailureDelivered extends DeliveryMonitor {

      /**
       * In case the reception of the {@code InboxMessage} failed,
       * mark it as {@code DELIVERED} anyway.
       */
      @Override
      public FailedReception.Action onReceptionFailure(FailedReception reception) {
          // Error details are available as well:
          // InboxMessage msg = reception.message();
          // Error error = reception.error();
          // notifyOf(msg, error);
          return reception.markDelivered();
      }
  }

  // ...

  // Plugging the monitor into the Delivery:
  DeliveryMonitor monitor = new MarkFailureDelivered();
  Delivery delivery = Delivery.newBuilder()
          .setMonitor(monitor)
          // ...
          .build();
  ServerEnvironment
          .when(MyEnvironment.class)
          .use(delivery);
  ```

  By default, `InboxMessage`s are marked as `DELIVERED` in case of a failure of their reception.
- Prohibit calling `state()` from `@Apply`-ers (#1501)

  It is no longer possible to call `Aggregate.state()` from `@Apply`-ers. Previously, it was possible, but as discovered from real-world cases, such functionality is prone to logical errors. End-users must use `Aggregate.builder()` instead, as in the sketch below.
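  A brief sketch of an event applier inside a hypothetical `TaskAggregate`; the `TaskCreated` event and the `title` field are illustrative:

  ```java
  @Apply
  private void on(TaskCreated event) {
      // Mutate the state only through `builder()`...
      builder().setTitle(event.getTitle());
      // ...and read the values being built from it as well;
      // calling `state()` here is now prohibited.
  }
  ```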
- Fix delivering signals to aggregate `Mirror`s in a multi-Bounded Context environment (#1502)

  Previously, when several Bounded Contexts had their `Aggregate`s “visible” (i.e. exposed via `Mirror`), the delivery mechanism was confused by multiple `Mirror` entity types which technically were distinct, but at the same time had exactly the same Type URL. Such use cases led to failures when the `Aggregate` state on the read side was updated by the framework code.

  This release alters the Type URLs under which `Mirror` projections register themselves in `Delivery`. The new Type URL value includes the name of the Bounded Context — which makes this Type URL invalid in terms of type discovery, but addresses the issue.
- Importing domain events from 3rd-party contexts properly in multi-tenant environments (#1503)

  Previously, in a multi-tenant application, the imported events were dispatched in a straightforward manner, without specifying the `TenantId` in the dispatching context. Now, this issue is resolved.
- Allow subscribers to receive a notification once an Entity stops matching the subscription criteria (#1504)

  Starting with this release, clients of the gRPC Subscription API will receive updates once entities previously included into a subscription as matching are modified so that they no longer pass the subscription criteria. In particular, this will always be the case if an Entity becomes archived or deleted.

  The new endpoint is available to Spine clients under the `whenNoLongerMatching()` DSL, and is a part of the Client’s request API:

  ```java
  Client client = client();
  client
      /* ... */
      .subscribeTo(Task.class)
      .observe((task) -> { /* ... */ })
      .whenNoLongerMatching(TaskId.class, (idOfNonMatchingEntity) -> { /* ... */ })
      .post();
  ```
- More granularity in shard pick-up results (#1505)

  In this release, we start to distinguish the shard pick-up results. In particular, it is now possible to find out the reason for an unsuccessful shard pick-up: there may be some runtime issue, or a shard may already be picked up by another worker.

  Two new API endpoints were added to the `DeliveryMonitor` to provide end-users with some control over such cases:

  - `FailedPickUp.Action onShardAlreadyPicked(AlreadyPickedUp failure)`

    Invoked if the shard is already picked up by another worker. The callback provides some insight into the pick-up failure, such as the ID of the worker currently holding the shard, and the `Timestamp` of the moment when the shard was picked up by it.

    The callback is also required to return an action to take for this case. By default, an action silently accepting this scenario is returned. End-users may implement their own means, e.g. retrying the pick-up attempt:

    ```java
    final class MyDeliveryMonitor extends DeliveryMonitor {
        // ...
        @Override
        public FailedPickUp.Action onShardAlreadyPicked(AlreadyPickedUp failure) {
            return failure.retry();
        }
        // ...
    }
    ```

  - `FailedPickUp.Action onShardPickUpFailure(RuntimeFailure failure)`

    This method is invoked if the shard could not be picked up for some runtime technical reason. It receives the `ShardIndex` of the shard that could not be picked up, and the instance of the occurred `Exception`. It also requires returning an action to handle this case. By default, such failures are just rethrown as `RuntimeException`s, but end-users may choose to retry the pick-up:

    ```java
    final class MyDeliveryMonitor extends DeliveryMonitor {
        // ...
        @Override
        public FailedPickUp.Action onShardPickUpFailure(RuntimeFailure failure) {
            return failure.retry();
        }
        // ...
    }
    ```
- A built-in `Sample` type, which provides the generation of sample Proto messages, was improved to produce more human-friendly `String` values (#1506).
Google Cloud Java
- Support for distinguishing the shard pick-up results, as introduced in `core-java` (#181).

  See core-java#1505 for more detail.

- Updates of third-party dependencies:

  - Cloud Datastore is now `2.14.2`,
  - Cloud Pub/Sub is used at `1.105.8`,
  - Cloud Trace version becomes `2.14.0`.
JDBC Storage
- Widen the database-specific data type for `InboxId` and other IDs (#168)

  `VARCHAR(512)` is now used to store identifiers (turned into strings) for all known Spine-specific tables.

- Improve the actual SQL-based querying when running against MySQL (#169)

  SQL queries generated via Querydsl are optimised when the underlying RDBMS is MySQL. In particular:

  - updates of single Entities or single messages (such as `InboxMessage`) were optimized for MySQL by using the `INSERT ... ON DUPLICATE KEY UPDATE …` clause,
  - MySQL-specific queries are only used if the JDBC driver contains `"mysql"` (with the case ignored) in its FQN,
  - the previously available functionality for trimming the `Aggregate` storage was implemented properly, as its execution against a real MySQL server led to SQL errors earlier.

- `ORDER BY` validation (#169)

  The `ORDER BY` column names are now checked for correctness prior to using them in a query, addressing #160.

- Proper `Inbox` pagination

  `Inbox` contents are now properly paginated, addressing #136.

- Third-party dependency updates:

  - Querydsl is now at `5.0.0`.
Web
- Bulk keep-up and cancellation requests for subscriptions (#193)

  This feature was previously shipped as a hot-update for Spine 1.7.4. Now, the functionality is fully ported to the `v1` branch.

- `HttpClient` customisation (#194)

  It is now possible to customise the `HttpClient` for each type of request (sending commands, queries, or interacting with subscriptions). In particular, end-users are now able to set HTTP headers, the request mode, or customise how the original Proto message is transformed for transmission over the wire.

  Additionally, a new `HttpResponseHandler` routine has been extracted from the existing code to allow its customisation by end-users. It is responsible for transforming the raw response content into JS objects. The default implementation — what we used to have in previous versions as a hard-coded behaviour — expects the server side to return a JSON string, and parses it into a JS object.

- Immediate subscription cancellation (#195)

  Calling unsubscribe for any subscription now leads to an immediate cancellation on both the client and server side. Previously, all such cancellations were only processed during the next “keep-up” propagation.

  The Client now allows cancelling all known subscriptions via the no-args `cancelAllSubscriptions()` call. Its invocation leads to sending the bulk cancellation request to the server side, as well as shutting down the client-level subscriptions. Such an API is useful in case end-users choose to log out from the application.

- Observing Entity deletions at client-side (#197)

  Previously, the `itemRemoved` callback for subscriptions was not functioning properly. Now, it is invoked whenever a target Entity becomes archived or deleted.

- Updated third-party dependencies:

  - Firebase Admin SDK (server-side) to `9.1.1`,
  - Firebase JS SDK (client-side) is now `9.16.0`,
  - Guava is used at `31.1-jre`,
  - Google’s repackaged Apache HTTP Client is now at version 2 (v1 is now deprecated) and its version is `1.42.2`.
Dart
- Customization of communication with the back-end

  `HttpTranslator` is introduced to allow customizing the execution of HTTP requests. In particular, it makes it possible to define the HTTP headers of the requests sent to the back-end, as well as to translate the back-end responses.

  Here is how to supply a custom `HttpTranslator`:

  ```dart
  import 'package:spine_client/http_client.dart';

  class CustomHttpTranslator extends HttpTranslator {
    // ...

    @override
    Map<String, String> headers(Uri uri) => {'Custom-Header': 'foo-bar'};
  }

  // ...

  var clients = Clients(backendUrl,
      // ...
      httpTranslator: CustomHttpTranslator(...));
  ```

  If the translator is not customized, the behavior is the same as in previous Dart client releases:

  - the `application/x-protobuf` content type is set for all requests,
  - the HTTP request body is a Proto message transformed to bytes and encoded with Base64,
  - HTTP responses are treated as JSON strings, and parsed accordingly.

- Subscription to entity deletions and archivals

  The recent changes in the Spine web back-end made it possible to receive subscription updates whenever a matching entity is archived or deleted. Previously, this functionality was nominally supported by the Dart client API, but in fact it was not functioning properly.

- A word on streams in subscriptions

  The subscription API is built around Dart streams. At the moment, only broadcast streams are supported by the Dart client. While it is possible to use Dart’s built-in mechanisms to convert single-subscription streams into broadcast streams, such a conversion cannot, in general, be performed with proper performance and behavior. Therefore, any custom implementations of `FirebaseClient` should return broadcast streams, performing the conversion internally if needed.

- Exposing more API

  Some API which was previously inaccessible to end-users is now available. In particular:

  - `json.dart` — the conversion of Proto messages from JSON strings into objects;
  - `known_types.dart` — a registry of all Proto types known to the library; required in order to create Proto messages via parsing.

  These files were moved from `lib/src` to the `lib` folder.

- Dart dependency updates

  This release removes the dependency on the Firebase client implementation which is specific to web.

  Also, this release specifies the `dartdoc` dependency as `^5.0.0` to support null safety.
Other Libraries
The Time library and the Bootstrap plugin were updated to support the changes in other components of Spine 1.9.0.