One of the core features of Xtext is the ability to create a semantic model from a set of input files. While loading a resource, Xtext transforms the parsed AST into a domain model and tries to cross-link the model elements.
From an information-theoretic point of view this cannot be solved generically for every imaginable language, but we try to provide reasonable defaults with Xtext. Getting started with a language that covers common use cases should be as easy as possible, so the default linking is based on a simple heuristic: most likely, a model element is referenced by some kind of name, so we look for a name feature in the model elements and try to match it against the input text of a reference.
Let's consider a small example language for clarification:
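The original grammar snippet did not survive in this copy of the post. A hypothetical Xtext grammar in the spirit of the discussion (all names, keywords, and the use of a multi-value cross-reference are assumptions) could look like this:

```xtext
grammar org.example.SampleLang with org.eclipse.xtext.common.Terminals

generate sampleLang "http://www.example.org/SampleLang"

// The root model acts as the container. 'refs' is a multi-value
// cross-reference ('+=' with brackets), 'objects' contains the declarations.
Model:
    ('ref' refs+=[Obj])*
    objects+=Obj*;

Obj:
    'obj' name=ID index=INT;
```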
The following input file would certainly match the given concrete syntax:
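The input file itself is also missing here. A hypothetical input matching the situation described below (a reference in line one, two equally named objects in lines two and three) might be:

```
ref Obj2
obj Obj2 1
obj Obj2 2
```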
If we take a look at lines two and three, we notice two objects in the container that have the same name but different indexes. The default linking implementation that ships with the M3 milestone of TMF Xtext would link the reference in line one to both 'Obj2' objects. This seems perfectly fine, because our container can refer to many other objects (note the '+=' notation in the concrete syntax of our language).
But in many cases the user would expect the reference to point to a unique object, even if the metamodel allows multiple values for a feature.
So how can we identify these ambiguities without implementing the whole linking service ourselves? The simple answer is: in TMF Xtext M3, we cannot. But there is a workaround.
What we have to do is slightly modify our grammar and thereby derive a different metamodel:
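The refined grammar is likewise missing from this copy. The combination described below — containment plus single-value cross-references — could be sketched like this (names are assumptions carried over from the hypothetical grammar above):

```xtext
// 'refs' now contains Ref objects (containment, no brackets); each Ref
// holds a single-value cross-reference to exactly one Obj.
Model:
    refs+=Ref*
    objects+=Obj*;

Ref:
    'ref' target=[Obj];

Obj:
    'obj' name=ID index=INT;
```

Each 'Ref' is a contained object whose 'target' must resolve to exactly one 'Obj', so an ambiguous name now surfaces at the single-value reference instead of silently linking to several objects.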
The refined 'SampleLang' uses a combination of containment and single-value references instead of multi-value references. At first glance this means greater effort when writing functions that work with our models. So what is the benefit? The default linking implementation works out of the box, and implementing constraints on top of it to check our new kind of multi-value references is pretty straightforward.
But what's the good news? This workaround does its job. And even better: it will not be required in TMF Xtext M4, which will be released at the end of the year. We have thought again about linking and will come up with another default implementation that is suitable for many more cases that cannot be handled by the default language services in M3.
Sunday, November 30, 2008
Wednesday, November 12, 2008
Optimistic Locking Revised
When you implement concurrency control in a business application, you have to decide between pessimistic and optimistic locking. For some types of applications optimistic locking is sufficient, because in the most common case users modify different resources, so only a few of them will complain about lost changes. With optimistic locking, subsequent conflicting changes are denied, and the user has to refresh his view of the data and redo his modifications before saving the resources.
Basically, there are two ways to implement optimistic locking. Both have advantages and disadvantages.
- One solution is to compare the original state of every modified property with the current value in the database and to update only the affected columns. This implies that almost every update statement is a custom statement: you cannot use prepared statements to perform batch updates, which can be a major performance drawback. The statements would look like this:
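The statement did not survive in this copy of the post; a hypothetical example of this style (table, column, and placeholder names are assumptions) might read:

```sql
-- Only 'firstname' was modified by the client, so only that column is
-- written, and it is compared against the value the client originally read.
UPDATE person
   SET firstname = :new_firstname
 WHERE id = :id
   AND firstname = :old_firstname;
```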
- The second possibility is to introduce some kind of update counter in your table. Before you update a record in the database, the current value of the update counter is checked against the base value that was read by the client. If both values are equal, the update is performed and the counter incremented. The update counter is either an integer or a timestamp.
This is a typical example of an update statement using an update counter:
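The snippet is missing here; a hypothetical statement of this kind (names assumed) could look like:

```sql
-- Every column is written; the WHERE clause only checks that the record
-- still has the version the client originally read.
UPDATE person
   SET firstname    = :new_firstname,
       lastname     = :new_lastname,
       update_count = update_count + 1
 WHERE id = :id
   AND update_count = :base_update_count;
```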
The obvious advantage of this technique is that prepared statements can be used, because the structure and the parameter list are the same for every modified record in a table. But chances are that modifications are refused even if they do not conflict: the new data could be identical to the persisted record, or disjoint properties could have been modified compared to the common base version. In both cases this solution is quite frustrating, or at least confusing, from the user's point of view.
If you implement optimistic locking with one of these patterns, you have to check the number of modified records in your code when you execute such a query. In pseudo code with a Java-like syntax, this looks like the following snippet:
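The snippet itself is not preserved; a sketch of the check in Java-like pseudo code (the statement object and the exception type are assumptions) could read:

```
int modifiedRecords = updateStatement.executeUpdate();
if (modifiedRecords == 0) {
    // no row matched the WHERE clause: the record was changed
    // or deleted concurrently, so the update is refused
    throw new OptimisticLockException(record);
}
```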
Neither the first nor the second obvious implementation is satisfying, because of the mentioned drawbacks. Let's try to combine the advantages of both approaches at the expense of a slightly more complex pattern for update statements:
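The combined pattern is missing from this copy; one way to read the idea (a hedged reconstruction, all names assumed) is a prepared statement with a fixed structure in which every column is written and every column is checked, but a column passes the check if it still holds the old value or already holds the new one, i.e. the change does not conflict:

```sql
UPDATE person
   SET firstname    = :new_firstname,
       lastname     = :new_lastname,
       update_count = update_count + 1
 WHERE id = :id
   AND (firstname = :old_firstname OR firstname = :new_firstname)
   AND (lastname  = :old_lastname  OR lastname  = :new_lastname);
```

Because the statement shape is identical for every record of a table, prepared statements and batching remain possible, while identical or disjoint concurrent changes are no longer refused.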
This implementation reduces the risk of wrongly refused updates to a minimum. It even allows special treatment of blob fields, which cannot be compared directly. Additionally, you have the chance to define special semantics for groups of fields, e.g. if any value in a group has been modified, the whole update can be denied, even if the concrete field that changed was not previously edited by another user.
I don't know of any implementation of this idea, but I am very interested in real-world experience. Reports about performance differences and the influence on overall usability are especially welcome.
Friday, November 7, 2008
Here we go ...
This is the mandatory first post in and about this blog. On an irregular basis I will write about random topics like software development and arbitrary banalities. Stay tuned.