
Arrays.sort (with Comparator) – same or different thread?

Is the comparator code in the Arrays.sort method called in the same thread as
the call to sort or a different thread?

I am asking this in the context of JDK 8.

I think the answer is that it’s called in the same thread but I am not 100% sure. I would be glad if the person answering this question provides some references or some other kind of detailed explanation (other than simple Yes or No).

Answer

The answer is that it is the same thread; Arrays.sort does not call your comparator on a different thread. Sorting (in Arrays.sort) is implemented with DualPivotQuicksort. From the docs:

This class implements the Dual-Pivot Quicksort algorithm by Vladimir
Yaroslavskiy, Jon Bentley, and Josh Bloch. The algorithm offers O(n
log(n)) performance on many data sets that cause other quicksorts to
degrade to quadratic performance, and is typically faster than
traditional (one-pivot) Quicksort implementations. All exposed methods
are package-private, designed to be invoked from public methods (in
class Arrays) after performing any necessary array bounds checks and
expanding parameters into the required forms.

As you can see in the implementation, it doesn't spin up any threads, so your comparator is invoked on the thread that called sort.

Further, there are parallelSort methods, which use the common ForkJoinPool to perform parallel execution. This is very explicit, and as some of the other commenters mentioned already, the chances of the JDK API being vague about such an issue are very slim.
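Not part of the original answer, but a quick way to see this for yourself (a minimal sketch of my own, the class name is made up): record the current thread's name inside the comparator. With Arrays.sort it is always the calling thread, while Arrays.parallelSort on a large enough array will typically also report ForkJoinPool common-pool workers:

import java.util.Arrays;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SortThreadDemo {
    public static void main(String[] args) {
        Integer[] data = new Integer[200_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = data.length - i;
        }

        Set<String> threads = ConcurrentHashMap.newKeySet();

        // Comparator runs on the calling thread only.
        Arrays.sort(data.clone(), (a, b) -> {
            threads.add(Thread.currentThread().getName());
            return Integer.compare(a, b);
        });
        System.out.println("Arrays.sort used: " + threads); // [main]

        threads.clear();

        // parallelSort may invoke the comparator on ForkJoinPool workers.
        Arrays.parallelSort(data, (a, b) -> {
            threads.add(Thread.currentThread().getName());
            return Integer.compare(a, b);
        });
        // Typically [main, ForkJoinPool.commonPool-worker-...] for large arrays.
        System.out.println("Arrays.parallelSort used: " + threads);
    }
}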


ES6 exports/imports use-case, compared to traditional namespacing

I don’t understand WHY and in what scenario this would be used..

My current web setup consists of lots of components, which are just functions or factory functions, each in their own file, and each function "rides" the app namespace, like: app.component.breadcrumbs = function(){... and so on.

Then GULP just combines all the files, and I end up with a single file, so a page controller (each "page" has a controller which loads the components the page needs) can just load its components, like: app.component.breadcrumbs(data).

All the components can be easily accessed on demand, and the single JavaScript file is cached well. This way of working seems extremely good; I never saw any problem with it, and of course it can be (and is) scaled nicely.

So how are ES6 imports for functions any better than what I described?

What's the deal with importing functions instead of just attaching them to the app's namespace? It makes much more sense to me for them to be "attached".

Files structure

/dist/app.js                     // web app namespace and so on
/dist/components/breadcrumbs.js  // some component
/dist/components/header.js       // some component
/dist/components/sidemenu.js     // some component
/dist/pages/homepage.js          // home page controller

// GULP concat all above to
/js/app.js // this file is what is downloaded

Then inside homepage.js it can look like this:

app.routes.homepage = function(){
    "use strict";
    var DOM = { page : $('#page') };

    // append whatever components I want to this page
    DOM.page.append(
        app.component.header(),
        app.component.sidemenu(),
        app.component.breadcrumbs({a:1, b:2, c:3})
    )
};

This is an extremely simplified code example, but you get the point.

Answer

Answers to this are probably a little subjective, but I’m going to do my best.

At the end of the day, both methods support creating a namespace for a piece of functionality so that it does not conflict with other things. Both work, but in my view modules, ES6 or otherwise, provide a few extra benefits.

Explicit dependencies

Your example seems very biased toward a "load everything" approach, but you'll generally find that to be uncommon. If your components/header.js needs to use components/breadcrumbs.js, assumptions must be made. Has that file been bundled into the overall JS file? You have no way of knowing. Your two options are:

  1. Load everything
  2. Maintain a file somewhere that explicitly lists what needs to be loaded.

The first option is easy and in the short term is probably fine. The second hurts maintainability: because the list lives outside the code itself, it would be very easy to stop needing one of your component files but forget to remove it from the list.

It also means that you are essentially defining your own syntax for dependencies when one has already been defined in the language and the community.

What happens when you want to start splitting your application into pieces? Say you have an application that is a single large file driving 5 pages on your site, because it started out simple and wasn't big enough to matter. Now the application has grown and should be served as a separate JS file per page. You have now lost the ability to use option #1, and some poor soul would need to build the new list of dependencies for each end file.

What if you start using a file in new places? How do you know which JS target files actually need it? What if you have twenty target files?

What if you have a library of components that are used across your whole company, and one of them starts relying on something new? How would that information be propagated to all of the developers using those components?

Modules allow you to know with 100% certainty what is used where, with automated tooling. You only need to package the files you actually use.
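To make that concrete, here is a minimal sketch of my own, reusing the question's hypothetical file layout and jQuery usage: each component declares exactly what it needs and exports what it provides.

// components/breadcrumbs.js
export function breadcrumbs(data) {
    // render a trivial placeholder for the example
    return $('<nav class="breadcrumbs">').text(JSON.stringify(data));
}

// pages/homepage.js
import { header } from '../components/header.js';
import { sidemenu } from '../components/sidemenu.js';
import { breadcrumbs } from '../components/breadcrumbs.js';

export function homepage() {
    const page = $('#page');
    page.append(
        header(),
        sidemenu(),
        breadcrumbs({ a: 1, b: 2, c: 3 })
    );
}

A bundler (or <script type="module">) follows these imports, so the list of files that actually need to be loaded is derived automatically instead of maintained by hand.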

Ordering

Related to dependency listing is dependency ordering. If your library needs to create a special subclass of your header.js component, you are no longer only accessing app.component.header() from app.routes.homepage(), which would presumably run at DOMContentLoaded. Instead you need to access it during the initial application execution, and simple concatenation offers no guarantee that header.js will have run yet. If you are concatenating alphabetically and your new file defines app.component.blueHeader(), it would fail, because blueHeader.js is concatenated (and therefore executed) before header.js.

This applies to anything that you might want to do immediately at execution time. If you have a module that immediately looks at the page when it runs, or sends an AJAX request or anything, what if it depends on some library to do that?

This is another argument against option #1 (load everything), so you start having to maintain a list again. That list is once more a custom thing you'll have come up with instead of a standardized system.

How do you train new employees to use all of this custom stuff you’ve built?

Modules execute files in order based on their dependencies, so you know for sure that the stuff you depend on will have executed and will be available.
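Sticking with the earlier blueHeader example (again my own sketch, with hypothetical file names), the import itself is what guarantees the ordering:

// components/blueHeader.js
// This import guarantees header.js has been evaluated before this module's
// body runs, regardless of file names or concatenation order.
import { header } from './header.js';

export function blueHeader() {
    // assuming header() returns a jQuery element, as in the question
    return header().addClass('blue');
}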

Scoping

Your solution treats everything as a standard script file. That's fine, but it means that you need to be extremely careful not to accidentally create global variables by placing them in the top-level scope of a file. This can be solved by manually wrapping (function(){ ... })(); around file content, but again, it's one more thing you need to know to do instead of having it provided for you by the language.
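A tiny illustration of the difference (mine, not from the original answer):

// classic script, loaded via <script> or concatenation:
var counter = 0;              // leaks: becomes window.counter

// the manual fix you have to remember in every file:
(function () {
    var counter = 0;          // file-local only because of the wrapper
})();

// counter.js, loaded as an ES6 module:
let count = 0;                // module-scoped automatically, never on window
export function next() {
    return ++count;
}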

Conflicts

app.component.* is something you've chosen, but there is nothing special about it, and it is global. What if you wanted to pull in a new library from GitHub, for instance, and it also used that same name? Do you refactor your whole application to avoid conflicts?

What if you need to load two versions of a library? That has obvious downsides if it's big, but there are plenty of cases where you'd rather ship something big than something non-functional. If you rely on a global object, it is now up to that library to make sure it also exposes an API like jQuery's noConflict. What if it doesn't? Do you have to add that yourself?
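By contrast, module imports are resolved by path or package name rather than by a shared global, and can be renamed locally. For example (a sketch with a made-up vendor package name):

import { breadcrumbs } from './components/breadcrumbs.js';
// same exported name from a third-party package, renamed to avoid a clash
import { breadcrumbs as vendorBreadcrumbs } from 'vendor-widgets';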

Encouraging smaller modules

This one may be more debatable, but I've certainly observed it within my own codebase. With modules, and the lack of boilerplate needed to write modular code with them, developers are encouraged to look closely at how things get grouped. It is otherwise very easy to end up making "utils" files that are giant bags of functions thousands of lines long, because it is easier to add to an existing file than it is to make a new one.

Dependency webs

Having explicit imports and exports makes it very clear what depends on what, and a welcome side effect is that it becomes much easier to think critically about dependencies. If you have a giant file with 100 helper functions and any one of those helpers needs something from another file, that other file has to be loaded even if nothing currently uses the helper in question. This can easily lead to a large web of unclear dependencies, and being aware of dependencies is a huge step toward thwarting that.

Standardization

There is a lot to be said for standardization. The JavaScript community has moved heavily in the direction of reusable modules. This means that if you hop into a new codebase, you don't need to start off by figuring out how things relate to each other. Your first step, at least in the long run, won't be to wonder whether something is AMD, CommonJS, System.register or something else. By having a syntax in the language, it's one less decision to have to make.

The long and short of it is, modules offer a standard way for code to interoperate, whether that be your own code, or third-party code.

If your current process is to always concatenate everything into a single large file, only ever execute things after the whole file has loaded, and keep 100% control over all of the code you execute, then you've essentially defined your own module specification based on your own assumptions about your specific codebase. That is totally fine, and no one is forcing you to change it.

No such assumptions can be made for the general case of JavaScript code, however. The objective of modules is precisely to provide such a standard in a way that does not break existing code while giving the community a way forward: a standardized approach that offers clearer paths for interoperability between your own code and third-party code.


Create an ES6 class with Array-like behaviour

I am trying to create a class that behaves a bit like an array. There are two things that I would like to have:

  • has to be iterable
  • should allow for property accessing via [index] where index is an integer

Making a class iterable is fairly easy:

class MyList {
    constructor() {
        this._list = [1, 2, 3];
    }
    [Symbol.iterator]() {
        return this._list.values();
    }
}

The above allows an instance of the class to be iterated over:

let myList = new MyList();
for (let item of myList) {
    console.log(item); // prints out 1, 2, 3
}

Figuring out how to implement the second requirement turns out not to be as easy, and the only thing I found was to extend Array. But this means that I would have to override most of the methods inherited from Array, as I would need those methods to do something other than the built-in behaviour.

Is there a way to achieve what I am asking? If so, what would be the best approach to do it?

Answer

Turns out you can store properties under integer-like string keys, e.g. foo['0'] = 'bar', and access them with integers, e.g. foo[0] // => 'bar'. Assigning with an integer also works. Thanks to @JMM for pointing this out.

Thus, the solution is as simple as:

class Foo {
  constructor (...args) {
    for (let i = 0; i < args.length; i++) {
      this[i] = args[i];
    }
  }

  [Symbol.iterator]() {
    // Object.keys lists the integer-like keys in ascending numeric order,
    // so iteration visits the stored values in index order.
    return Object
      .keys(this)
      .map(key => this[key])
      .values();
  }
}

const foo = new Foo('a', 'b', 'c');

for (let item of foo) {
  console.log(item); // prints out a, b, c
}

console.log(foo[1]); // prints out b



Method to cast Object to JSONObject or JSONArray depending on the Object

I have been trying a method like this but I can’t find any solution:

public static JSONObject or JSONArray objectToJSON(Object object){
    if(object is a JSONObject)
        return new JSONObject(object)
    if(object is a JSONArray)
        return new JSONArray(object)
}

I have tried this:

public static JSONObject objectToJSONObject(Object object){
    Object json = null;
    try {
        json = new JSONTokener(object.toString()).nextValue();
    } catch (JSONException e) {
        e.printStackTrace();
    }
    JSONObject jsonObject = (JSONObject)json;
    return jsonObject;
}

public static JSONArray objectToJSONArray(Object object){
    Object json = null;
    try {
        json = new JSONTokener(object.toString()).nextValue();
    } catch (JSONException e) {
        e.printStackTrace();
    }
    JSONArray jsonObject = (JSONArray)json;
    return jsonObject;
}

But then when I invoke objectToJSONArray(object) and pass in a JSONObject, it crashes on the cast. So I want a generic method. Has anyone found a solution?

Answer

I assume you’ve seen this question. You can probably just add a check of the type using instanceof before you return from each method, and return null if the Object is not of the type expected. That should get rid of the ClassCastException.

Example:

public static JSONObject objectToJSONObject(Object object){
    Object json = null;
    JSONObject jsonObject = null;
    try {
        json = new JSONTokener(object.toString()).nextValue();
    } catch (JSONException e) {
        e.printStackTrace();
    }
    if (json instanceof JSONObject) {
        jsonObject = (JSONObject) json;
    }
    return jsonObject;
}

public static JSONArray objectToJSONArray(Object object){
    Object json = null;
    JSONArray jsonArray = null;
    try {
        json = new JSONTokener(object.toString()).nextValue();
    } catch (JSONException e) {
        e.printStackTrace();
    }
    if (json instanceof JSONArray) {
        jsonArray = (JSONArray) json;
    }
    return jsonArray;
}

Then, you can try both methods, and use the return value of the one that doesn’t return null, something like this:

public void processJSON(Object obj){
    JSONObject jsonObj = null;
    JSONArray jsonArr = null;
    jsonObj = objectToJSONObject(obj);
    jsonArr = objectToJSONArray(obj);
    if (jsonObj != null) {
        //process JSONObject
    } else if (jsonArr != null) {
        //process JSONArray
    }
}
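
If you would rather have a single entry point, here is a sketch of my own (not from the linked question, and the method name is made up) that tokenizes once and returns a plain Object for the caller to inspect with instanceof:

public static Object objectToJSON(Object object) {
    Object json = null;
    try {
        // nextValue() yields a JSONObject, JSONArray, String, Number, etc.
        json = new JSONTokener(object.toString()).nextValue();
    } catch (JSONException e) {
        e.printStackTrace();
    }
    if (json instanceof JSONObject || json instanceof JSONArray) {
        return json;
    }
    return null; // neither a JSON object nor a JSON array
}

// Usage:
// Object json = objectToJSON(obj);
// if (json instanceof JSONObject) { /* process JSONObject */ }
// else if (json instanceof JSONArray) { /* process JSONArray */ }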

Warning the selected directory is not a valid tomcat home

I installed Tomcat with Homebrew (brew install tomcat). On selecting Tomcat Server to add a server under Application Servers, I get this:

Warning the selected directory is not a valid tomcat home.

However, running catalina start starts the Tomcat server from my terminal.

Answer

Step 1.

Download Tomcat with the tar.gz extension.
NB: the tar.gz extension.

Step 2.

Unzip the file, make sure the folder name remains tomcat, and save it to your Library.

Step 3.

Open the Preferences in your IntelliJ IDEA:

  • Under Build, Execution, Deployment, select Application Servers.
  • To add a new server, click + and select Tomcat Server from the drop-down.
  • Click the ... ellipsis and select the folder you unzipped earlier into your Library.
  • Select the tomcat folder and boom.

You are good to go, ready for use.
