How to display loading message when an iFrame is loading?

I have an iframe that loads a third party website which is extremely slow to load.

Is there a way I can display a loading message while that iframe loads, so the user doesn’t see a large blank space?

PS. Note that the iframe is for a third party website so I can’t modify / inject anything on their page.


I have tried the following CSS approach:

<div class="holds-the-iframe"><iframe here></iframe></div>

.holds-the-iframe {
  background: url(../images/loader.gif) center center no-repeat;
}

Eclipse. Copy JAR file into project

How do I include (NOT ONLY reference) a JAR file in an existing project? I added it using Project properties – Build path – Add External JARs. But when I exported my project and imported it on another computer, the library was missing.


Drag it into your project view so it appears in the project as any other file.

Right-click the jar in the project view and add to build path.


Fully native apps on Android?

Is it possible to develop a native application that runs without relying on the Dalvik runtime or any of the Java libraries?

Basically, I want to make a native binary I can run via a shell that would be able to run without the system_server process running. Ideally, I want to be able to create my own Window Server by rendering stuff via the OpenGL system, instead of relying on SurfaceFlinger (which would also be dead due to the fact that system_server isn’t running).

The reason I’m asking is that I want to experiment with lower-level Android development in C/C++, for which Java is simply unnecessary. So basically, I’m trying to develop a standalone app that can render things via OpenGL+Cairo and receive HID input.

PS: I know what the NDK is, and it’s not what I’m looking for. I want to create standalone binaries instead of creating stuff that runs inside the Dalvik VM.


There are two possibilities for running native code on your device: either using the NDK or embedding your application into the framework. As I understand it, the first approach is not an option for you, so I think you should have a look at the second. Here is an example of how to implement it.

An example of porting existing code to a custom Android device

It is due time for another technical post here on the blog. This post is about porting existing C libraries to Android, something I did as part of the dev board demos we are doing here at Enea.

Platform addition or NDK

There are two ways of bringing native code to an Android device: either add it to the platform itself and integrate it with the framework, or include it with an application package. The latter method has evolved a lot, and with the release of NDK version 5 it even allows you to hook directly into the application lifecycle from native code. The NDK is useful for any application where you need native performance, have portable C libraries you want to reuse, or just have some legacy native code that could be included in your application. The NDK integrates well with the Android SDK and is a great way to include native functionality in your application. It should be the preferred way for any application that needs to be reusable across a lot of Android devices.

The other option is to include your functionality, it may be native or Java, as an API extension for all applications to use. This will only work on devices that implement these extensions and it may be a suitable option for device builders. This is the variant that we aim for here.

Analyze the existing project

Porting native code to Android is not always straightforward, especially if we are talking about C++ code, since Android uses its own C runtime (bionic) with limited support for exceptions, among other things. If you want to know more about the details of bionic, there is an overview in the NDK docs.

The code I wanted to port for this project was the Enea LINX for Linux framework, which is a fast IPC framework. My purpose was to be able to interact with control systems running our OSE real-time operating system, which also implements this kind of IPC. LINX consists of a couple of kernel driver modules, a user space library, and some configuration and control utilities. It is written in C. I had created a small demo with LINX on Android before, where I compiled it separately and used static linking, but for this project I wanted a complete port to the Android build system. The code did not have any issues with bionic compatibility, so the port should be straightforward.

I just want to add a short disclaimer about LINX. I use it here since it is a good example of integrating a solution into Android, from kernel drivers up to the API level. This particular piece of code adds additional IPC mechanisms to the system, which more or less bypasses the security model, so do not use it unless you are aware of the implications. The steps described in this post for porting code to Android are, however, applicable to any type of driver/framework/library that you may want to include in your product.

Adding kernel driver modules

The first step was to add the kernel modules to the Android build. One way would have been to build a new kernel and include them directly, but for this project I chose to keep them as separate modules. Building the kernel is not handled by the Android build system, meaning that we build the modules as we would on any Linux system. The target is an Atmel-based development board, and in the LINX module build I provide the headers and cross-compilation toolchain for that kernel and architecture.

Now for the Android-specific parts. We need to add the compiled kernel modules to the platform build system in some way and create an Android.mk file that includes them in the system image when we build. Add a folder in the source tree where your project will go; device or external are suitable candidates. I created a folder called linx that will hold the entire LINX port, and in that I added a subfolder called modules where I place the prebuilt kernel modules. Now what we need is an Android makefile to copy them to the suitable place in the out folder for system image generation. It will look like:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_MODULE := linx.ko
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := ETC

# This will copy the file to /system/lib/modules
LOCAL_MODULE_PATH := $(TARGET_OUT)/lib/modules
LOCAL_SRC_FILES := $(LOCAL_MODULE)

include $(BUILD_PREBUILT)

The standard location for modules on the Android system image is /system/lib/modules, so that is where we copy them. If we build the platform now, the build system will copy our precompiled module linx.ko to the system image that we use for our device. The next step is to make sure the module is installed on the system when we run it. This can be done either manually via the shell or via a script that runs during init.

In this case I have created a shell script to be launched from init.rc with the following content:

#linx init
insmod /lib/modules/linx.ko
insmod /lib/modules/linx_tcp_cm.ko
netcfg eth0 up
ifconfig eth0
mktcpcon --ipaddr= ControlConn
mklink --connection=tcpcm/ControlConn control_link

This installs the modules and configures the network and the LINX link. We launch it from init.rc by adding:

#linx init script
service linx-setup /system/etc/

The setup script is added to the system image in the same way by including it as a prebuilt target.

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

# Module name assumed for illustration; use the file name of your setup script
LOCAL_MODULE := linx-setup.sh
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := ETC
LOCAL_MODULE_PATH := $(TARGET_OUT)/etc
LOCAL_SRC_FILES := $(LOCAL_MODULE)

include $(BUILD_PREBUILT)

Creating Android make files for the user space code

Now that we have the drivers in place, the next step is to port the user space libraries. The default LINX build system uses standard GNU makefiles, but we need to create new ones adapted to the Android build system. Start by adding the needed source files to the linx directory created in the Android source tree.

I have the linx setup script and the main Android.mk file in the top directory, the source files in separate folders, and the include files in the include folder. To illustrate how the Android makefiles for each source component are created, we can use liblinx as an example. Its Android.mk file looks like:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_MODULE := liblinx
# Source list shortened for illustration
LOCAL_SRC_FILES := linx.c
LOCAL_C_INCLUDES := $(LOCAL_PATH)/../include

include $(BUILD_SHARED_LIBRARY)
We set our sources by specifying LOCAL_SRC_FILES and the name of the library by specifying LOCAL_MODULE. We also need to supply the header files in the include directory by specifying LOCAL_C_INCLUDES. Finally, since this is a shared library we use the BUILD_SHARED_LIBRARY template. This builds the library with the Android build system and adds it to the system image as a shared library named liblinx.so.

The rest of the code is moved to the Android build system in the same way, by creating Android.mk files and specifying the build type and any dependencies. As another example, we may look at the syntax for building the mktcpcon configuration program. This depends on the library we just created, and hence the makefile entry looks like:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_SRC_FILES := mktcpcon.c
LOCAL_MODULE := mktcpcon
LOCAL_SHARED_LIBRARIES := liblinx

include $(BUILD_EXECUTABLE)

Here we use the BUILD_EXECUTABLE template, and we also specify the static and shared libraries that we link against, via LOCAL_STATIC_LIBRARIES and LOCAL_SHARED_LIBRARIES.


I hope that provides some insight into how you set up the build for an existing Linux project to run on an Android system. The steps to follow are:

  • Build any kernel-related parts using the correct kernel build system and config for your device.
  • Add the kernel modules (and/or kernel) to the platform build system and create Android.mk files for them using the prebuilt template.
  • Create configuration and initialization services for your drivers if needed and add them to init.
  • Move the rest of your code (user space) to the Android source tree and create Android.mk files for it.
  • If you encounter build errors, work them out in the source code and see what incompatibilities your code has with the specifics of the Android C runtime.

That wraps up my post for today. Having done this, we are now able to use our added drivers and APIs from native programs running in the shell. The next step is to create a JNI layer and a Java library to allow regular Android applications to make use of our platform additions.

I have been away for half a year on paternity leave (a nice Swedish benefit), but now it is full-time Android hacking again and pushing the team to publish things. Hopefully you will see more activity here, including a follow-up to this post discussing application APIs.


Why use returned instance after save() on Spring Data JPA Repository?

Here is the code:

public interface AccountRepository extends JpaRepository<Account, Long> {}

JpaRepository is from the Spring Data JPA project.

Here is the testing code:

public class JpaAccountRepositoryTest extends JpaRepositoryTest {

    @Autowired
    private AccountRepository accountRepository;

    private Account account;

    @Test
    public void createAccount() {
        Account returnedAccount = accountRepository.save(account);

        System.out.printf("account ID is %d and for returned account ID is %d%n",
                account.getId(), returnedAccount.getId());
    }
}

Here is the result:

account ID is 0 and for returned account ID is 1

Here is from javadoc:

Saves a given entity. Use the returned instance for further operations as the save operation might have changed the entity instance completely.

Here is the actual code for SimpleJpaRepository from Spring Data JPA:

    public T save(T entity) {
            if (entityInformation.isNew(entity)) {
                    em.persist(entity);
                    return entity;
            } else {
                    return em.merge(entity);
            }
    }

So, the question is: why do we need to use the returned instance instead of the original one? (Yes, we must do it, otherwise we continue working with a detached instance, but why?)

The original EntityManager.persist() method returns void, so our instance is attached to the persistence context. Does some proxy magic happen when passing account to the repository’s save method? Is it an architectural limitation of the Spring Data JPA project?


The save(…) method of the CrudRepository interface is supposed to abstract simply storing an entity no matter what state it is in. Thus it must not expose the actual store-specific implementation, even if (as in the JPA case) the store differentiates between new entities to be stored and existing ones to be updated. That’s why the method is called save(…), not create(…) or update(…). We return a result from that method to allow the store implementation to return a completely different instance, as JPA potentially does when merge(…) gets invoked.

Also, persistence implementations actually capable of dealing with immutable objects (i.e. not JPA) might have to return a fresh instance if the actual implementation requires populating an identifier or the like. I.e. it’s generally wrong to assume that the implementation would just consume the entity state.

So generally it’s more of an API decision to be lenient (permissive, tolerant) regarding the actual implementation, and thus we implement the method for JPA as we do. There’s no additional proxy massaging done to the entities passed.
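To make the effect concrete, here is a minimal, self-contained sketch in plain Java (not the real Spring Data API; Account and AccountRepository are simplified stand-ins) showing why code should keep working with the instance that save returns:

```java
// Hypothetical stand-ins for the Spring Data types; not the real API.
class Account {
    private final long id;
    Account(long id) { this.id = id; }
    long getId() { return id; }
}

class AccountRepository {
    private long sequence = 0;

    // Like EntityManager.merge(), save() may hand back a *different*
    // instance carrying store-populated state (here, the generated ID).
    Account save(Account entity) {
        return new Account(++sequence); // the original instance is untouched
    }
}

public class SaveDemo {
    public static void main(String[] args) {
        AccountRepository repo = new AccountRepository();
        Account account = new Account(0);
        Account returned = repo.save(account);
        // Mirrors the output in the question: 0 vs 1.
        System.out.printf("account ID is %d and for returned account ID is %d%n",
                account.getId(), returned.getId());
    }
}
```

With a real JPA-backed repository the same thing happens through merge(…): the managed copy carries the generated identifier, while the instance you passed in stays detached and unchanged.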


How to get wifi standard

How can I get and change the Wi-Fi standard that my Android device is currently using, for example IEEE 802.11b, 802.11g, or 802.11n? Is this possible?


It’s not possible to get the type of network the phone is connected to. However, you can find the speed of the network:

    WifiManager wifiManager = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
    WifiInfo wifiInfo = wifiManager.getConnectionInfo();
    if (wifiInfo != null) {
        int linkSpeed = wifiInfo.getLinkSpeed(); // measured in WifiInfo.LINK_SPEED_UNITS (Mbps)
    }

P.S.: You can probably guess the type of network by interrogating the encryption on the network, but there is no built-in method.
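As a rough illustration of that guessing idea, here is a plain-Java heuristic; the guessStandard helper and its thresholds are my own assumptions (based on nominal 802.11 PHY rates), not anything provided by the Android SDK:

```java
public class WifiStandardGuesser {

    // Map a link speed to the slowest 802.11 generation that can reach it.
    // Thresholds are approximate maximum PHY rates in Mbps and are
    // illustrative assumptions, not an Android API.
    static String guessStandard(int linkSpeedMbps) {
        if (linkSpeedMbps <= 11)  return "802.11b";
        if (linkSpeedMbps <= 54)  return "802.11a/g";
        if (linkSpeedMbps <= 600) return "802.11n";
        return "802.11ac or newer";
    }

    public static void main(String[] args) {
        // A reported link speed of 150 Mbps would suggest 802.11n.
        System.out.println(guessStandard(150));
    }
}
```

On a device you would feed it the value from wifiInfo.getLinkSpeed(); the guess is unreliable near the boundaries, since real link speeds vary with signal quality and channel width.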

Source: stackoverflow
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. The answers are collected from Stack Overflow and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.