Why is “synchronized” not allowed on Java 8 interface methods?

This was a deliberate decision, rather than an omission (as has been suggested elsewhere). While at first it might seem obvious that one would want to support the synchronized modifier on default methods, it turns out that doing so would be dangerous, and so it was prohibited.
Synchronized methods are shorthand for a method that behaves as if its entire body were enclosed in a synchronized block whose lock object is the receiver. It might seem sensible to extend these semantics to default methods as well; after all, they are instance methods with a receiver too. (Note that synchronized methods are entirely a syntactic optimization; they're not needed, they're just more compact than the corresponding synchronized block. There's a reasonable argument to be made that this was a premature syntactic optimization in the first place, and that synchronized methods cause more problems than they solve, but that ship sailed a long time ago.)
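For concreteness, here is a minimal sketch of that equivalence (the class and field names are invented for illustration): the synchronized modifier on an instance method behaves the same as wrapping the method body in a synchronized (this) block.

class Counter {
    private int count = 0;

    // Using the synchronized modifier...
    public synchronized void increment() {
        count++;
    }

    // ...behaves the same as locking explicitly on the receiver.
    public void incrementExplicit() {
        synchronized (this) {
            count++;
        }
    }
}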
So, why are they dangerous? Synchronization is about locking. Locking is about coordinating shared access to mutable state. Each object should have a synchronization policy that determines which locks guard which state variables. (See Java Concurrency in Practice, section 2.4.)
Many objects use as their synchronization policy the Java Monitor Pattern (JCiP 4.1), in which an object's state is guarded by its intrinsic lock. There is nothing magic or special about this pattern, but it is convenient, and the use of the synchronized keyword on methods implicitly assumes this pattern.
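As a small illustration (the class is hypothetical), the Java Monitor Pattern simply means that all of an object's mutable state is guarded by that object's own intrinsic lock, which is exactly what synchronized methods give you:

class BankAccount {
    // Mutable state, guarded by the intrinsic lock (this) per the Java Monitor Pattern.
    private long balanceCents = 0;

    public synchronized void deposit(long cents) {
        balanceCents += cents;
    }

    public synchronized long balance() {
        return balanceCents;
    }
}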
It is the class that owns the state that gets to determine that object's synchronization policy. But interfaces do not own the state of the objects into which they are mixed. So using a synchronized method in an interface assumes a particular synchronization policy, one you have no reasonable basis for assuming, so it may well be that the synchronization provides no additional thread safety whatsoever (you might be synchronizing on the wrong lock). This would give you a false sense of confidence that you have done something about thread safety, and no error message tells you that you're assuming the wrong synchronization policy.
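To make the hazard concrete: since the compiler rejects the synchronized modifier on a default method, the sketch below (all names are invented) uses an explicit synchronized (this) block inside a default method to stand in for what such a method would do. The implementing class guards its state with a private lock rather than its intrinsic lock, so the interface's locking is aimed at the wrong lock.

interface Counter {
    void increment();
    long get();

    // What a "synchronized default method" would amount to: locking on the receiver.
    default void incrementBy(long n) {
        synchronized (this) {
            for (long i = 0; i < n; i++) {
                increment();
            }
        }
    }
}

class LockBasedCounter implements Counter {
    // This implementation's synchronization policy uses a private lock,
    // not the intrinsic lock the default method assumed.
    private final Object lock = new Object();
    private long count = 0;

    @Override
    public void increment() {
        synchronized (lock) { count++; }
    }

    @Override
    public long get() {
        synchronized (lock) { return count; }
    }
}

The synchronized (this) block was presumably meant to make the batch increment atomic, but because the implementation guards count with lock rather than this, another thread can interleave calls to increment() mid-batch; the interface-level locking provided no atomicity at all.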
It is already hard enough to consistently maintain a synchronization policy for a single source file; it is even harder to ensure that a subclass correctly adheres to the synchronization policy defined by its superclass. Trying to do so between such loosely coupled classes (an interface and the possibly many classes that implement it) would be nearly impossible and highly error-prone.
Given all those arguments against, what would be the argument for? It seems they're mostly about making interfaces behave more like traits. While this is an understandable desire, the design center for default methods is interface evolution, not "Traits--". Where the two could be consistently achieved, we strove to do so, but where one is in conflict with the other, we had to choose in favor of the primary design goal.
