What are the benefits of ScalaFX besides adding syntactic sugar for bindings?

I am trying to implement a simple note organizer with some mind mapping functionality using JavaFX and Scala.

I am trying to decide whether I should call JavaFX code directly from Scala or go through ScalaFX. I don’t know whether it is worth learning ScalaFX, or whether it would be simpler to call JavaFX directly from Scala code.

The official ScalaFX site mentions 4 benefits of ScalaFX:

1) Natural Language Bind Expressions

-It’s nice, but I don’t really plan on using bindings that much (I intend to use an EventBus for inter-GUI-component events and a few bindings for intra-GUI-component events).

2) Tailored Animation Syntax

-I don’t plan to use animations in my project.

3) Full Type-Safe APIs

This may seem like an insignificant point… Type safety is something
that Java developers have always had (and often take for granted), and
developers in other scripting languages live without (and unknowingly
suffer with runtime errors as a result). However, it is a critical
feature if you are developing applications that cannot have unexpected
runtime errors and bugs after deployment.

A good compiler will be able to pick up many common coding mistakes
through comparison of expected and actual types, and a great compiler
(like Scala) will automatically infer types for you so you don’t have
to tediously repeat them throughout your code.

ScalaFX gets the best of both worlds with a scripting-like DSL syntax
where you rarely have to explicitly type objects, combined with the strong
type-safety of the Scala compiler that will infer and check the types
of every expression and API call. This means less time spent debugging
weird code bugs and misspellings, and higher quality code right out of
the gate!

-This seems interesting! But my question is: doesn’t calling JavaFX directly from Scala give me the same type-safety guarantees as calling JavaFX via ScalaFX? I don’t know.

4) Seamless JavaFX/ScalaFX interoperability:

-If I call JavaFX directly from Scala, then I don’t have to worry about interoperability issues any more than when calling JavaFX via ScalaFX.

In summary:

It seems that point 3 is the only one that might give me a benefit I care about in my simple project, but I just don’t know what kind of type safety they are really talking about.

Why is it better, with respect to type safety, to call JavaFX via ScalaFX than directly from Scala?
What additional type-safety benefits do we get if we use ScalaFX instead of accessing JavaFX directly from Scala? I am asking because I cannot really imagine what kind of additional type safety ScalaFX could provide.

So, in other words, I understand that ScalaFX is nice syntactic sugar for bindings, but does it offer more than that? Should I really use it if I can live without the (very nice) syntactic sugar it provides?

Is there something other than the sugar that would make it worth using this wrapper layer (ScalaFX), which introduces extra complexity (and a potential source of bugs)?

Please note that I really appreciate the work of ScalaFX’s creators! I am only asking these questions to be able to make a better-informed decision.


ScalaFX is a DSL for working with JavaFX. Offering what you call “syntactic sugar” is the main purpose of a DSL.

(A DSL traditionally also limits the scope of the host language to the target domain, but that is usually not desired of Scala DSLs.)

Now, how and when that is useful is a debate in its own right, but that is essentially all it offers.
Personally, I would always prefer an API that lets me communicate my intent more clearly to my peers and my future self, but that is something every team and project has to decide for itself.
The binding syntax of ScalaFX is wonderful because Properties and Bindings are finding their way into more and more backends (even without a JavaFX GUI).

The reason ScalaFX advertises type safety, I think, is not that it is a special feature of ScalaFX itself, but that it is noteworthy that such a concise, script-like language as ScalaFX, leveraging the power of the platform that is Scala, still gives you type safety (which might be counter-intuitive to newcomers and people unfamiliar with Scala).

I would recommend using ScalaFX in your case, as it sounds like you are working on a small project that is mainly focused on the user experience delivered through a JavaFX GUI (I assume, given your description). ScalaFX will allow you to iterate quickly on the GUI.

Don’t worry about overhead in the beginning; your use case will hardly be a performance-demanding app. And if you do need to worry about performance, why are you using Scala? ;)

The biggest downside to ScalaFX is that every JavaFX type needs to be wrapped with an SFXDelegate, which is cumbersome if some type you need is not wrapped, or if something gets added to JavaFX in the future and you have to wait for ScalaFX to wrap it before you can use it. Neither of those is a real blocker, though: firstly, it is trivial to wrap a JavaFX type yourself (see the ScalaFX wiki), and secondly, the release cycle of JavaFX is much slower than ScalaFX’s.


Equivalent of Ruby Hash in Java

I am really used to the following kind of code in Ruby:

my_hash = {}
my_hash['test'] = 1

What is the corresponding data structure in Java?


HashMap<String, Integer> map = new HashMap<>();
map.put("test", 1);

I assume?
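If it helps, here is a short self-contained sketch of the closest Java idioms (the class and variable names are just for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class HashDemo {
    public static void main(String[] args) {
        // Ruby: my_hash = {}; my_hash['test'] = 1
        Map<String, Integer> myHash = new HashMap<>();
        myHash.put("test", 1);

        // Ruby's my_hash['missing'] returns nil; Java's get returns null.
        // getOrDefault is the closest analog of Ruby's Hash#fetch(key, default).
        int missing = myHash.getOrDefault("missing", 0);

        System.out.println(myHash.get("test")); // prints 1
        System.out.println(missing);            // prints 0
    }
}
```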


JavaScript print all used Unicode characters

I am trying to make JavaScript print all Unicode characters. According to my research, there are 1,114,112 Unicode characters.

A script like the following could work:

for (var i = 0; i < 1114112; i++) {
    console.log(String.fromCodePoint(i));
}

But I found out that only about 10% of those 1,114,112 code points are actually in use.

How can I print only the used Unicode characters?


As Jukka said, JavaScript has no built-in way of knowing whether a given Unicode code point has been assigned a symbol yet or not.

There is still a way to do what you want, though.

I’ve written several scripts that parse the Unicode database and create separate data files for each category, property, script, block, etc. in Unicode. I’ve also created an HTTP API that allows you to programmatically get all code points (i.e. an array of numbers) in a given Unicode category, or all symbols (i.e. an array of strings for each character) with a given Unicode property, or a regular expression with that matches any symbols in a certain Unicode script.

For example, to get an array of strings that contains one item for each Unicode code point that has been assigned a symbol in Unicode v6.3.0, you could use the following URL:

Note that you can prepend and append anything you like to the output by tweaking the URL parameters, to make it easier to reuse the data in your own scripts. An example HTML page that console.log()s all these symbols, as you requested, could be written as follows:

<!DOCTYPE html>
<meta charset="utf-8">
<title>All assigned Unicode v6.3.0 symbols</title>
<script src=""></script>
<script>
  window.symbols.forEach(function(symbol) {
    // Do what you want to do with `symbol` here, e.g.
    console.log(symbol);
  });
</script>
Demo. Note that since this is a lot of data, you can expect your DevTools console to become slow when opening this page.

Update: Nowadays, you should use Unicode data packages such as unicode-11.0.0 instead. In Node.js, you can then do the following:

const symbols = require('unicode-11.0.0/Binary_Property/Assigned/symbols.js');

// Or, to get the code points:
const codePoints = require('unicode-11.0.0/Binary_Property/Assigned/code-points.js');

// Or, to get a regular expression that only matches these characters:
const regex = require('unicode-11.0.0/Binary_Property/Assigned/regex.js');

Gradle: optimize running tests in parallel

I am experimenting with Gradle’s capability for running tests in parallel. The main setting I have found is the maxParallelForks property of Test tasks. I expected the behavior of that setting to be similar to having an Executors.newFixedThreadPool execute the tests. Namely, a fixed number of threads (processes, in the case of Gradle) execute concurrently; whenever one thread finishes its work, a new one is activated in the pool.
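For reference, that property is set on the Test task in build.gradle (the value here is illustrative):

```groovy
test {
    // Maximum number of test worker processes to start in parallel.
    maxParallelForks = 4
}
```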

However, the behavior of Gradle is fundamentally different, in a less optimal way. It looks like Gradle divides the test classes into a number of groups equal to maxParallelForks, then spawns a process for each group and lets those processes execute in parallel. The problem with this strategy is obvious: it cannot dynamically adjust the execution based on the time needed by a test class.

For example, suppose you have 5 classes and maxParallelForks is set to 2. Among the five classes, there is a slow one and the rest are relatively quick. An ideal strategy would be to let one process execute the slow one and the other process the quick ones. However, what Gradle does is group the slow one together with one or two quick ones and spawn two processes to execute the two groups of classes, which is certainly less optimal than the ideal case.

Here is a simple demo.

A slow class:

class DemoTest {
    @Test
    void one() {
        Thread.sleep(5000)
        println System.getProperty('org.gradle.test.worker') + ": " + new Date().format('HH:mm:ss')
        assert 1 == 1
    }

    @Test
    void two() {
        Thread.sleep(5000)
        println System.getProperty('org.gradle.test.worker') + ": " + new Date().format('HH:mm:ss')
        assert 1 == 1
    }
}

Quick classes (DemoTest2–5, with identical class bodies):

class DemoTest2 {
    @Test
    void one() {
        Thread.sleep(1000)
        println System.getProperty('org.gradle.test.worker') + ": " + new Date().format('HH:mm:ss')
        assert 1 == 1
    }

    @Test
    void two() {
        Thread.sleep(1000)
        println System.getProperty('org.gradle.test.worker') + ": " + new Date().format('HH:mm:ss')
        assert 1 == 1
    }
}

All the classes are in package junit, which happens to be the same name as a famous test framework 🙂

Here is a possible output:

junit.DemoTest2 > one STANDARD_OUT
    2: 14:54:00

junit.DemoTest2 > two STANDARD_OUT
    2: 14:54:01

junit.DemoTest4 > one STANDARD_OUT
    2: 14:54:02

junit.DemoTest4 > two STANDARD_OUT
    2: 14:54:03

junit.DemoTest > one STANDARD_OUT
    3: 14:54:04

junit.DemoTest > two STANDARD_OUT
    3: 14:54:09

junit.DemoTest3 > one STANDARD_OUT
    3: 14:54:10

junit.DemoTest3 > two STANDARD_OUT
    3: 14:54:11

junit.DemoTest5 > one STANDARD_OUT
    3: 14:54:12

junit.DemoTest5 > two STANDARD_OUT
    3: 14:54:13

As you can see, the slow class DemoTest is grouped with two quick classes. The total run time is about 13 seconds, but it could have been about 10 seconds if the quick classes had been grouped together.

So, is there any straightforward way to optimize this behavior in Gradle without resorting to a custom JUnit runner?

Thank you very much.


This can only be optimized by making changes to the Gradle codebase.


Visitor pattern with Java 8 default methods

The Visitor pattern (double dispatch) is a very useful pattern in its own right, but it has often been criticized for breaking interfaces whenever a new member is added to the inheritance hierarchy, which is a valid point.

But after the introduction of default methods in Java 8, now that we can define default implementations in interfaces, the visitor interfaces will not break, and clients can gracefully adopt the changed interface as appropriate.

interface Visitor {
    void visit(Type1 type);
    void visit(Type2 type);

    // added after the first version of Visitor is released
    default void visit(NewType type) {
        // some default implementation
    }
}
Now, with default methods, client code no longer breaks if a new type NewType is introduced in the future.

Does this make Visitor more adoptable and useful?


Your question contains the implicit assertion that a Visitor has to be an interface. Since the Visitor pattern is not Java-specific, it does not mandate such an implementation.

In fact, many implementations around the world use an abstract class for the Visitor, or use an interface but provide an abstract implementation class at the same time.

While this comment makes the valid point that forcing implementations allows unhandled cases to be detected at compile time, that applies only when every visitor always has to provide implementations for every visit method. This can produce quite a lot of code bloat when you have many cases (and may cause other developers to write their own abstract base class for their visitors).

As said, not everyone uses the Visitor pattern this way. A lot of implementations use abstract classes to provide empty visit methods, or visit methods that delegate to another visit method taking a more abstract type. For these implementations, adding a new type was never an issue.

And, to answer your question: when using the Visitor pattern in a way that does not force every Visitor to provide an implementation for every method, using default methods in interfaces is an option. But it does not make the Visitor pattern “more adoptable and useful,” as there never was a real problem with it. The option to use an abstract visitor class has always existed.
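As a minimal sketch of that abstract-class alternative (the Type1/Type2/NewType element types here are hypothetical, chosen to match the question's example):

```java
// Hypothetical element types, for illustration only.
class Type1 {}
class Type2 {}
class NewType {}  // added in a later version

// Abstract base visitor with empty (no-op) visit methods: subclasses
// only override the cases they care about, so adding the NewType
// method later does not break any existing subclass.
abstract class BaseVisitor {
    public void visit(Type1 type) {}
    public void visit(Type2 type) {}
    public void visit(NewType type) {}  // added later without breaking anyone
}

class CountingVisitor extends BaseVisitor {
    int type1Count = 0;

    @Override
    public void visit(Type1 type) {
        type1Count++;
    }
}

public class VisitorDemo {
    public static void main(String[] args) {
        CountingVisitor v = new CountingVisitor();
        v.visit(new Type1());
        v.visit(new Type2());   // falls back to the empty base implementation
        v.visit(new NewType()); // same: no breakage
        System.out.println(v.type1Count); // prints 1
    }
}
```

This gives the same forward compatibility as the Java 8 default-method version, and it has worked since long before Java 8.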

Source: stackoverflow
Text is available under the Creative Commons Attribution-ShareAlike License (CC BY-SA 2.5, 3.0, or 4.0); additional terms may apply. The questions and answers are collected from Stack Overflow.