Good idea to include JAR files in Android library (potential conflict with other libraries)?

My Android library requires some of the newer Apache HttpClient JARs so that I can upload a multipart file (see related SO question).

I’m pretty sure there will be some apps out there using my library with a different version of these Apache JAR files already, resulting in a conflict.

I’ve seen this happen quite a few times with android-support-v4.jar and third-party libraries. Is there any best-practice work-around, or should I implement multipart uploading from scratch?


You can repackage the Apache HTTP client under a new package name so that it does not conflict with any other versions on the classpath.

This question explains how to do it with the Maven Shade plugin.

This article explains how to do it with jarjar and ant.
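As a sketch of the shade approach, a relocation rule along these lines rewrites the HttpClient packages inside your library's JAR (the `com.example.repackaged` prefix is a placeholder to replace with your own namespace):

```xml
<!-- Sketch of a maven-shade-plugin relocation; the shadedPattern prefix
     is an illustrative placeholder, not a required value. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.http</pattern>
            <shadedPattern>com.example.repackaged.org.apache.http</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

After shading, your library's bytecode references the relocated packages, so an app can ship any other HttpClient version without a clash.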


Ajax MVC Partial Returning Correct Response Yet Fires the Error Handler

This has me completely stumped. So odd.

I have this Ajax function defined:

    $.ajax({
        type: 'GET',
        dataType: 'text/HTML',
        url: getLicenseeDetailsUrl,
        success: function (response) {
            // ... render the returned partial view
        },
        error: function (xhr) {
            alert('Failed to get licensee details');
        }
    });
And I have it calling into my controller which has an action like:

    public ActionResult LoadLicenseeDetails(long licenseeId)
    {
        var model = new LicenseeDetailsViewModel();

        var licencesee = _licensingRepository.LoadById(licenseeId);
        var licenses = _licensingRepository.LoadLicenses(licenseeId);

        model.Licencee = Mapper.Map<Licensee, LicenceeViewModel>(licencesee);
        model.Licences = Mapper.Map<IEnumerable<License>, IEnumerable<LicenceViewModel>>(licenses);

        return this.PartialView("_LicenseeDetails", model);
    }

This all seems to work as expected without any errors, yet it ends up firing the Ajax error function instead of the success function.

Looking at xhr.responseText, I can see the correct response information from the controller action!

All with a 200 OK status as well. What on earth am I doing wrong here?


What on earth am I doing wrong here?


dataType: 'text/HTML'

should become:

dataType: 'html'

Quote from the documentation of the dataType parameter:

dataType (default: Intelligent Guess (xml, json, script, or html))

Type: String

The type of data that you’re expecting back from the server. If none is specified, jQuery will try to infer it based on the MIME type of the response (an XML MIME type will yield XML, in 1.4 JSON will yield a JavaScript object, in 1.4 script will execute the script, and anything else will be returned as a string). The available types (and the result passed as the first argument to your success callback) are:

“xml”: Returns a XML document that can be processed via jQuery.

“html”: Returns HTML as plain text; included script tags are evaluated when inserted in the DOM.

“script”: Evaluates the response as JavaScript and returns it as plain text. Disables caching by appending a query string parameter, “_=[TIMESTAMP]”, to the URL unless the cache option is set to true. Note: This will turn POSTs into GETs for remote-domain requests.

“json”: Evaluates the response as JSON and returns a JavaScript object. The JSON data is parsed in a strict manner; any malformed JSON is rejected and a parse error is thrown. As of jQuery 1.9, an empty response is also rejected; the server should return a response of null or {} instead. (See json.org for more information on proper JSON formatting.)

“jsonp”: Loads in a JSON block using JSONP. Adds an extra “?callback=?” to the end of your URL to specify the callback. Disables caching by appending a query string parameter, “_=[TIMESTAMP]”, to the URL unless the cache option is set to true.

“text”: A plain text string.

multiple, space-separated values: As of jQuery 1.5, jQuery can convert a dataType from what it received in the Content-Type header to what you require. For example, if you want a text response to be treated as XML, use “text xml” for the dataType. You can also make a JSONP request, have it received as text, and interpreted by jQuery as XML: “jsonp text xml.” Similarly, a shorthand string such as “jsonp xml” will first attempt to convert from jsonp to xml, and, failing that, convert from jsonp to text, and then from text to xml.

Or even better, simply get rid of this parameter. jQuery is intelligent enough to use the Content-Type response HTTP header set by the server in order to deduce the correct type and process the parameter passed to the success callback.

Look at the Console tab of your javascript debugging toolbar in the browser. It will provide you with more information about the error.


Should Hibernate Session#merge do an insert when receiving an entity with an ID?

This seems like it would come up often, but I’ve Googled to no avail.

Suppose you have a Hibernate entity User. You have one User in your DB with id 1.

You have two threads running, A and B. They do the following:

  • A gets user 1 and closes its Session
  • B gets user 1 and deletes it
  • A changes a field on user 1
  • A gets a new Session and merges user 1

All my testing indicates that the merge attempts to find user 1 in the DB (it can’t, obviously), so it inserts a new user with id 2.

My expectation, on the other hand, would be that Hibernate would see that the user being merged was not new (because it has an ID). It would try to find the user in the DB, which would fail, so it would not attempt an insert or an update. Ideally it would throw some kind of concurrency exception.

Note that I am using optimistic locking through @Version, and that does not help matters.

So, questions:

  1. Is my observed Hibernate behaviour the intended behaviour?
  2. If so, is it the same behaviour when calling merge on a JPA EntityManager instead of a Hibernate Session?
  3. If the answer to 2. is yes, why is nobody complaining about it?


I’ve been looking at JSR-220, from which Session#merge claims to get its semantics. The JSR is sadly ambiguous, I have found.

It does say:

Optimistic locking is a technique that is used to insure that updates to the database data corresponding to the state of an entity are made only when no intervening transaction has updated that data since the entity state was read.

If you take “updates” to include general mutation of the database data, including deletes, and not just a SQL UPDATE, which I do, I think you can make an argument that the observed behaviour is not compliant with optimistic locking.

Many people agree, given the comments on my question and the subsequent discovery of this bug.

From a purely practical point of view, the behaviour, compliant or not, could lead to quite a few bugs, because it is contrary to many developers’ expectations. There does not seem to be an easy fix for it. In fact, Spring Data JPA seems to ignore this issue completely by blindly using EM#merge. Maybe other JPA providers handle this differently, but with Hibernate this could cause issues.

I’m actually working around this by using Session#update currently. It’s really ugly, and requires code to handle the case when you try to update an entity that is detached, and there’s a managed copy of it already. But, it won’t lead to spurious inserts either.
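A rough sketch of that update-based workaround (sessionFactory and detachedUser are illustrative names, not code from my project; the exception handling is the ugly part mentioned above):

```java
// Sketch of the Session#update workaround. Assumes org.hibernate.Session,
// a User entity with an @Version field, and a detached instance "detachedUser".
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    // Unlike merge(), update() never falls back to an insert: if the row
    // was deleted by another transaction, the flush fails instead.
    session.update(detachedUser);
    tx.commit();
} catch (NonUniqueObjectException e) {
    // A managed copy with the same id is already in this Session; this is
    // the case that needs extra code (copy state onto the managed instance).
    tx.rollback();
} catch (StaleStateException e) {
    // The row no longer exists or its @Version changed: a real concurrency
    // conflict, which merge() would have masked with a spurious insert.
    tx.rollback();
} finally {
    session.close();
}
```

The point of the sketch is the contrast: update() surfaces the deleted-row case as an exception, whereas merge() silently re-inserts.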


What are template classes in Spring Java? Why are they called templates? For example jdbc-template, jms-template etc

I’m new to Java; I’ve only been programming in it for about a year. What does Spring mean by the use of templates? In Spring, there are JdbcTemplate, JmsTemplate, etc. What are template classes in Java? Are they a special kind of design pattern, or what?

Thank you in advance.


They are called templates because they use the Template Method pattern.

The basic idea is to define the skeleton of an operation, the steps that are always the same, in an abstract class or superclass, and let a subclass (or a callback you pass in) supply the steps that vary.

In Spring’s case, the operations that always need to be done for a given purpose (open a connection or obtain one from the pool, execute, translate exceptions, close the connection) are handled automatically, so the user only needs to call the methods without worrying about that boilerplate.
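Here is a minimal, self-contained sketch of that idea in plain Java (no Spring; FakeConnection and SimpleTemplate are made-up names that mimic the shape of JdbcTemplate, not its real API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

class FakeConnection {
    boolean open;
    final List<String> rows = List.of("alice", "bob"); // stand-in for a result set
}

class SimpleTemplate {
    // The "template method": the boilerplate steps are fixed here.
    <T> List<T> query(Function<String, T> rowMapper) {
        FakeConnection conn = new FakeConnection(); // 1. acquire the resource
        conn.open = true;
        try {
            List<T> result = new ArrayList<>();     // 2. execute and map each row
            for (String row : conn.rows) {
                result.add(rowMapper.apply(row));
            }
            return result;
        } finally {
            conn.open = false;                      // 3. always release, even on error
        }
    }
}

public class TemplateDemo {
    public static void main(String[] args) {
        SimpleTemplate template = new SimpleTemplate();
        // The caller supplies only the varying step (how to map a row)
        // and never touches the connection handling.
        List<String> names = template.query(row -> row.toUpperCase());
        System.out.println(names); // prints [ALICE, BOB]
    }
}
```

The caller can never forget to close the connection, because the template owns the try/finally; that is exactly what JdbcTemplate and JmsTemplate do with real JDBC/JMS resources.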


Does POI XSSF still have crazy bad memory issues?

A couple of years ago, I ran into issues when creating large Excel files with jXLS and POI XSSF. If my memory is correct, XSSF would create something like 1 GB+ of temp files on disk to produce a 10 MB Excel file. So I stopped using jXLS and used SXSSF to create the Excel files instead, but today I have new reasons to use jXLS or JETT.

Both the jXLS and JETT websites seem to allude that performance is much better now, but POI’s XSSF website still says, generically, that XSSF requires a higher memory footprint. I am wondering whether this higher memory footprint is a reasonable 10% overhead these days, or still more like the 10,000% overhead it was a couple of years ago.

Are the crazy bad memory issues fixed with POI 3.9 XSSF? Should I not worry about using it with jXLS or JETT? Or are there certain gotchas to avoid? I am careful about reusing cell styles.


To answer your question: yes, POI will always use a very large amount of memory when working on large XLSX files, much larger than the size of the XLSX files themselves.
I don’t think this will change anytime soon, and there is a pretty obvious reason for it: XLSX is basically a bunch of zipped XML files, and XML compresses very well (around 10x). Just holding that XML in memory uncompressed already increases memory consumption tenfold, so once you add the overhead of POI’s data structures, there is no way you should expect a mere 10% increase over the XLSX file size.

Now, the good news is that, as mentioned in the comments, Apache POI introduced SXSSF for streaming very large amounts of data into a spreadsheet with very good performance and low memory usage. XLSX files generated this way are still staged on the hard disk, where they can end up taking quite a bit of space, but at least you don’t risk an OutOfMemoryError when writing hundreds of thousands of rows.

The problem for you is that you won’t be able to get JETT to work directly with SXSSF, because JETT needs the whole document loaded in memory to perform template filling. The JETT author briefly discussed this topic here.

I had the same problem, and ended up doing a two-step XLSX creation:

  1. A standard JETT XLSX template to generate headers and formatting. The last row of the first sheet contains cells with $$tokens$$, one per cell. I don’t use JETT to insert the large amount of rows.

  2. Once JETT did its work, I reopen the workbook, read then delete the $$tokens$$ on the last line of the first spreadsheet, and start streaming data with SXSSF row by row.

Of course, there are limitations to this approach:

  • You cannot use JETT on any of the streamed rows during row insertion (but you can before, for example to dynamically pick the order of the $$tokens$$).
  • Cell formats won’t be copied unless you take care of it yourself with the POI API. I personally prefer to format whole columns in my XLSX file, so the formatting also applies to the streamed data.

This also works if you want to show charts using data inserted with SXSSF: You can define a Named Range with functions OFFSET and COUNTA, then create a Pivot table & Pivot Chart that will be refreshed when the XLSX is opened in Excel.
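Step 2 of the approach above might look roughly like this with the POI API (the file names, the row window size of 100, and the starting row are illustrative assumptions, not code from my project):

```java
// Rough sketch of step 2: reopen the JETT-filled workbook and append rows
// with SXSSF. File names and the window size are placeholders.
import java.io.FileInputStream;
import java.io.FileOutputStream;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class StreamAfterJett {
    public static void main(String[] args) throws Exception {
        // Reopen the workbook that JETT already filled in (headers, formatting,
        // and the $$tokens$$ row, assumed to have been read and deleted here).
        XSSFWorkbook template = new XSSFWorkbook(new FileInputStream("jett-output.xlsx"));

        // Wrap it in SXSSF: only ~100 rows stay in memory; the rest is
        // flushed to temporary files on disk.
        SXSSFWorkbook wb = new SXSSFWorkbook(template, 100);
        Sheet sheet = wb.getSheetAt(0);

        int firstDataRow = sheet.getLastRowNum() + 1;
        for (int i = 0; i < 500_000; i++) {
            Row row = sheet.createRow(firstDataRow + i);
            row.createCell(0).setCellValue("value-" + i);
        }

        try (FileOutputStream out = new FileOutputStream("streamed-output.xlsx")) {
            wb.write(out);
        }
        wb.dispose(); // deletes the SXSSF temporary files
    }
}
```

Note that with this wrapping approach you can only append new rows; rows already present in the template cannot be rewritten once flushed.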

Source: Stack Overflow
Text is available under the Creative Commons Attribution-ShareAlike license; the answers are licensed under CC BY-SA 2.5, 3.0, or 4.0.