Java – Call to start() on a thread: how does it route to the Runnable interface’s run()?

Ok , I know the two standard ways to create a new thread and run it in Java :

  1. Implement Runnable in a class, define its run() method, and pass an instance of the class to a new Thread. When start() is called on the thread, the run() method of that instance is invoked.

  2. Derive the class from Thread so it can override the run() method; when a new instance’s start() method is called, the call is routed to the overridden method.

In both approaches, a new Thread object is created and its start() method invoked. In the second approach, the mechanism by which the call reaches the user-defined run() method is clear – it’s simple runtime polymorphism at play. But I don’t understand how a call to start() on the Thread object gets routed to the run() method of the class implementing the Runnable interface. Does the Thread class have a private field of type Runnable which it checks, invoking that object’s run() method if it is set? That would seem a strange mechanism to me.

How does the call to start() on a thread get routed to the run method of the Runnable interface implemented by the class whose object is passed as a parameter when constructing the thread?


The Thread keeps a reference to the Runnable instance, and calls it in the base implementation of run.

You can see this in the source:

// passed into the constructor and set in the init() method
private Runnable target;

// called from native thread code after start() is called
public void run() {
    if (target != null) {
        target.run();
    }
}
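To see the delegation end-to-end, here is a minimal sketch (the printed message is illustrative): new Thread(runnable).start() causes the VM to invoke the thread’s run(), whose default body forwards to the stored target:

```java
public class RunnableDemo {
    public static void main(String[] args) throws InterruptedException {
        // The Runnable passed to the constructor is stored in Thread's
        // private 'target' field.
        Runnable task = () -> System.out.println("target.run() invoked");
        Thread t = new Thread(task);
        t.start(); // native code starts the thread and calls t.run(),
                   // which finds target != null and calls task.run()
        t.join();
    }
}
```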

Logging from multiple apps/processes to a single log file

Our app servers (weblogic) all use log4j to log to the same file on a network share. On top of this we have all web apps in a managed server logging errors to a common error.log. I can’t imagine this is a good idea but wanted to hear from some pros. I know that each web app has its own classloader, so any thread synchronization only occurs within the app. So what happens when multiple processes start converging on a single log file? Can we expect interspersed log statements? Performance problems? What about multiple web apps logging to a common log file? The environment is Solaris.


It is generally a bad idea to have unsynchronized write access to a file, and certainly bad programming practice. The only case where it might work is appending to a file on a local machine – everybody just adds lines at the end of the file.

But since your file is on a network share, it will probably quickly turn into garbage. You didn’t say which distributed filesystem you are using, but for NFS you can find the following explanation in the open(2) man page (under O_APPEND):

The file is opened in append mode. Before each write(), the file offset is positioned at the end of the file, as if with lseek(). O_APPEND may lead to corrupted files on NFS file systems if more than one process appends data to a file at once. This is because NFS does not support appending to a file, so the client kernel has to simulate it, which can’t be done without a race condition.

Of course this is C, but since the JVM ultimately issues the same system calls, Java cannot do any better (at least not with regard to file appends :-)).
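If the processes really must share one file, the usual workaround from Java is an OS-level advisory lock via java.nio. A sketch, assuming a local or lock-capable filesystem (file locks over NFS are themselves unreliable); the class name and message are illustrative:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedAppender {
    // Append a line while holding an exclusive OS-level lock, so writes
    // from different processes cannot interleave mid-line.
    static void appendLine(Path logFile, String line) throws IOException {
        try (FileChannel ch = FileChannel.open(logFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            try (FileLock lock = ch.lock()) { // blocks until the lock is held
                ch.write(StandardCharsets.UTF_8.encode(line + System.lineSeparator()));
            }
        }
    }
}
```

Note that FileLock is advisory (every writer must cooperate), and two overlapping lock attempts from the same JVM throw OverlappingFileLockException. In practice, a dedicated log file per process, merged afterwards, is the safer design.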


calendar.getInstance() or calendar.clone()

I need to make a copy of a given date hundreds of times (I cannot pass by reference). I am wondering which of the two options below is the better one.




Performance is the main concern here.



I would use

newTime = (Calendar) originalDate.clone();
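For comparison, here are both copy styles side by side: clone() duplicates the existing instance’s fields in one step, while getInstance() builds a fresh instance (re-reading the default time zone and locale) and then still needs the millis copied over.

```java
import java.util.Calendar;

public class CalendarCopy {
    public static void main(String[] args) {
        Calendar original = Calendar.getInstance();

        // Option 1: clone() – a field-for-field copy of the existing instance.
        Calendar viaClone = (Calendar) original.clone();

        // Option 2: getInstance() + setTimeInMillis() – constructs a new
        // instance and then copies the instant across.
        Calendar viaInstance = Calendar.getInstance();
        viaInstance.setTimeInMillis(original.getTimeInMillis());

        System.out.println(viaClone.getTimeInMillis() == original.getTimeInMillis());    // true
        System.out.println(viaInstance.getTimeInMillis() == original.getTimeInMillis()); // true
    }
}
```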

Android horizontal scrollview behave like iPhone (paging)

I have a LinearLayout inside a HorizontalScrollView. The content is just an image. While scrolling, I need the same behavior you get when setting the paging option on the iPhone equivalent of the HorizontalScrollView (scrolling should stop at every page, not keep moving).

How is this done in Android? Should I implement this feature myself, or is there a particular property to set or a subclass of HorizontalScrollView to use?


So, my solution is:

  1. Intercept the onTouch event and calculate whether the view should move to the next page or stay on the current one
  2. Inherit from HorizontalScrollView and override the method computeScroll

The method computeScroll() is called to move the list. By default I suppose it’s implemented to decelerate with a certain ratio… Since I don’t want this motion, I just override it without specifying a body.

The code for the event handler is:

_scrollView.setOnTouchListener(new OnTouchListener() {
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            float currentPosition = _scrollView.getScrollX();
            float pagesCount = _horizontalBar.getChildCount();
            float pageLengthInPx = _horizontalBar.getMeasuredWidth() / pagesCount;
            float currentPage = currentPosition / pageLengthInPx;

            boolean isBehindHalfScreen = currentPage - (int) currentPage > 0.5;

            float edgePosition;
            if (isBehindHalfScreen) {
                edgePosition = (int) (currentPage + 1) * pageLengthInPx;
            } else {
                edgePosition = (int) currentPage * pageLengthInPx;
            }

            _scrollView.scrollTo((int) edgePosition, 0);
        }
        return false;
    }
});

And in my HorizontalScrollView subclass:

    @Override
    public void computeScroll() {
        // intentionally left empty to suppress the default fling/deceleration
    }
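The snap decision itself is plain arithmetic; pulled out of the listener, it might look like this (the method and class names are my own):

```java
public class PageSnap {
    // Given the current scroll offset and the page width, return the x offset
    // of the page edge to snap to: the next page if we are past the halfway
    // point, otherwise the start of the current page.
    static int snapTarget(float scrollX, float pageWidthPx) {
        float currentPage = scrollX / pageWidthPx;
        boolean pastHalf = currentPage - (int) currentPage > 0.5f;
        int page = pastHalf ? (int) currentPage + 1 : (int) currentPage;
        return (int) (page * pageWidthPx);
    }

    public static void main(String[] args) {
        System.out.println(snapTarget(130f, 200f)); // 200: 65% into page 0, snap forward
        System.out.println(snapTarget(90f, 200f));  // 0: 45% into page 0, snap back
    }
}
```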

Changing coding style due to Android GC performance, how far is too far?

I keep hearing that Android applications should try to limit the number of objects created in order to reduce the workload on the garbage collector. It makes sense that you may not want to create massive numbers of objects to track within a limited memory footprint; on a traditional server application, for example, creating 100,000 objects within a few seconds would not be unheard of.

The problem is how far should I take this? I’ve seen tons of examples of Android applications relying on static state in order to supposedly “speed things up”. Does increasing the number of instances that need to be garbage collected from dozens to hundreds really make that big of a difference? I can imagine changing my coding style to not create hundreds of thousands of objects like you might have on a full-blown Java-EE server, but relying on a bunch of static state to (supposedly) reduce the number of objects to be garbage collected seems odd.

How much is it really necessary to change your coding style in order to create performant Android apps?


The “avoid allocation” advice is usually with regard to game loops. The VM has to pause to collect garbage, and you don’t want that happening while your game is animating at 30fps. If you don’t allocate any objects, the VM won’t need to collect garbage to free memory. If you have a game that needs to run without user-visible hiccups, then you should consider changing the code in the relevant parts to minimize or eliminate allocation.

If you’re making an app that holds recipes or shows photos, I wouldn’t worry about it — the GC hiccup is not something the user will likely notice.

Future improvements to the Dalvik GC (e.g. generational collection) should make this less of an issue.
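To illustrate the allocation-free style for a hot loop: a common pattern is to preallocate and reuse a mutable scratch object rather than creating a new one per frame (all names here are illustrative):

```java
public class GameLoopSketch {
    // Mutable scratch object reused every frame instead of allocating anew.
    static final class Vec2 {
        float x, y;
        void set(float x, float y) { this.x = x; this.y = y; }
    }

    final Vec2 scratch = new Vec2(); // allocated once, up front

    // Called ~30 times per second; performs no allocation, so it never
    // contributes garbage that could trigger a GC pause mid-animation.
    void updateFrame(float dx, float dy) {
        scratch.set(dx, dy);   // reuse instead of: Vec2 v = new Vec2(dx, dy)
        // ... use scratch for physics/drawing ...
    }
}
```

The trade-off is the usual one for pooling: the code is less idiomatic and the scratch state must not leak between uses, which is why the advice is limited to the frame loop rather than the whole app.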

Source: stackoverflow