FateFree
Nov 14, 2003

How do I generate a random long from 1 to 7 billion, when there's no ranged method for longs like there is for integers?


FateFree
Nov 14, 2003

Yes, that's handy for ints, but I can't get a number over 2 billion that way. There's no corresponding nextLong(long range) method in Random; I imagine that's because the int version uses a long internally to catch overflows.

So I guess something special has to be done for my case but I'm not sure what it is.

FateFree
Nov 14, 2003

Parantumaton posted:

If your desired range is 1...7 billion and you can get to 2 billion, why not just do that one to three times, add those three together and then add another billion?

Well, the chances are completely different. Hitting 7,000,000,000 exactly should have probability 1/7,000,000,000. Your way requires hitting the 2,000,000,000 max three times in a row, which is (1/2,000,000,000)^3, an astoundingly smaller probability. The sum of several uniform draws isn't uniform either, so the whole distribution would be skewed.

FateFree
Nov 14, 2003

Psychorider posted:

The same way you do it if you don't have a range function for integer seems to work.
code:
long random = 1000000000L + (Math.abs(new Random(seed).nextLong()) % 6000000000L);
Or to modify Parantumaton's idea slightly
code:
Random r = new Random(seed);
long random = 1000000000L + (r.nextInt(3) * 2000000000L) + r.nextInt(2000000000);
Although the first way is better obviously.

Hmm, I see where the confusion is: I didn't mean from 1 billion to 7 billion, just plain old 1 to 7 billion. So would this be the correct approach?

code:
Math.abs(new Random(seed).nextLong() % 7000000000L) + 1
I guess what worries me is that the absolute value function maps two random values onto the same result, 10 and -10 for example.

FateFree
Nov 14, 2003

LakeMalcom posted:

For the random number/long guy: how about this: http://java.sun.com/j2se/1.4.2/docs/api/java/math/BigInteger.html#BigInteger(int,%20java.util.Random)

Thanks, I have taken this approach and am now stumped by a simple math problem.

So 2^33 is the lowest power of 2 greater than 7 billion, which means I can use new BigInteger(33, rand) and discard anything over my range. My question is how to determine the number 33 dynamically.

In math terms: how do I find the smallest power of 2 that's greater than a given number, in fast Java code?
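
edit - the answer turned out to be the bit length, which is what the method below ended up using; as a quick sketch, both of these give 33 for 7 billion:

code:
int bits  = BigInteger.valueOf(7000000000L).bitLength();    // 33
int bits2 = 64 - Long.numberOfLeadingZeros(7000000000L);    // 33, same thing without BigInteger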

FateFree
Nov 14, 2003


Ah thanks, that seems to work perfectly. Here's what I ended up with:

code:
/**
 * Return a uniformly distributed random long in [0, max), i.e. max exclusive.
 * 
 * @param max the exclusive upper bound; must be positive
 * @return a random long in [0, max)
 */
public long randomLong(long max) {
	if(max <= 0)
		throw new IllegalArgumentException("Max must be greater than 0");
	
	// draw bitLength(max) random bits and retry on the rare draw that lands at or above max;
	// each draw succeeds with probability > 1/2, so the recursion terminates quickly
	long value = new BigInteger(BigInteger.valueOf(max).bitLength(), getRandom()).longValue();
	return value < max ? value : randomLong(max);
}

FateFree
Nov 14, 2003

Mulloy posted:

code:

     A bunch of stuff to create windows and buttons and listeners.
     public class WindowOpener implements Runnable
     {
           public void run()
           {
                engine.processGrid();
           }
      }

}
When I try to run processGrid() from within TwoWindows it throws a null exception. Not sure what I'm misunderstanding. It's specifically the array class Grid I'm trying to have everything reference the same instance of that seems to be failing.

I'm surprised it even compiles. What is engine referring to? I don't see an instance variable.

But anyway, assuming there is one: are you instantiating aGrid anywhere? You declared it statically but it's never assigned, so by default it's null.

FateFree
Nov 14, 2003

I have an architecture question that surfaced after a huge refactoring of my web app framework (thanks to ignoring generics for so long).

I have a GenericDao<T> class to retrieve objects from the database with type safety. Standard methods like save(T), List<T> findAll(), etc.

I want to add a new method that adds restrictions to queries to enable things like finding all Users by username. The method signature would look something like List<T> findBy(String field, Object property).

However, I want to offer some protection and not actually accept raw Strings and Objects, but something more type safe. For instance, a GenericDao<User> should only be allowed to call findBy like so: findBy("username", String), where "username" is a fixed value in the form of a constant or, even better, an enum.

I can define an enum of acceptable values, but then I run into an inheritance problem, because I want all of my objects to accept the common properties such as Long id, Long timeCreated, Long timeModified, and as I understand it I can't extend an enum that holds those common values.

Can anyone give me some insight into the best way to architect this? Even if it's not a complete solution, some direction might help clarify things for me.

FateFree
Nov 14, 2003

I am a reflection noob. I have a class which holds several static IProperty fields and some static IReference fields. If I pass an object into a utility method, could I use reflection to get the instances of all these interfaces without knowing the field names, so I could do a for(IProperty property : properties) loop?
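
Something like this is what I'm picturing, as a rough sketch (SomeClass is a placeholder, and I'm assuming the fields are public statics):

code:
// imports: java.lang.reflect.Field, java.lang.reflect.Modifier, java.util.*
public static List<IProperty> findProperties(Class<?> clazz) throws IllegalAccessException {
	List<IProperty> properties = new ArrayList<IProperty>();
	for (Field field : clazz.getFields()) {
		// keep only the static fields whose declared type is an IProperty
		if (Modifier.isStatic(field.getModifiers())
				&& IProperty.class.isAssignableFrom(field.getType()))
			properties.add((IProperty) field.get(null)); // static fields take a null receiver
	}
	return properties;
}
// usage: for (IProperty property : findProperties(SomeClass.class)) { ... }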

FateFree
Nov 14, 2003

Question about generics and inheritance. I have a domain model hierarchy like so:

Persistable > Address

Persistable as a parent has fields that all persistable objects have, like id, timeCreated, timeModified.

Address is a subclass which has properties such as address, city, state, yada yada.

I have a Dao class which allows for dynamic where clauses. Traditionally, I just used static strings to identify columns, like so:

code:
Persistable {
 public static String ID = "id";
 public static String TIME_CREATED = "time_created";
}

Address extends Persistable {
  public static String CITY= "city";
}
Recently we upgraded to Java 1.5, so I wanted to add type safety. At first I looked into enums for these fields, but I didn't like that I couldn't extend the enums of the Persistable class. So instead I created (rjmccall's) Property<T, V> class, where T is the type of the owning class and V is the type of the value. Now the hierarchy looks like this:

code:
Persistable {
 public static Property<Persistable, Long> ID = ...; // instantiate
 public static Property<Persistable, Long> TIME_CREATED = ...;  // instantiate
}

Address extends Persistable {
  public static Property<Address, String> CITY = ...; // instantiate
}
This enables the Dao to have some type safety. For example, if I want to search by id or address, i would do this:

code:
PersistableDao<T extends Persistable> {

 public <V> List<T> find(Property<T, V> property, V value);

}
PersistableDao<Address> addressDao = ...; // instantiate
addressDao.find(Address.ID, 1L);
addressDao.find(Address.CITY, "new york");
With the type safety in place, the addressDao will throw a compile error if the value passed in doesn't match the type of V, which is great. The problem is, this code doesn't compile. The first find errors out because the static ID property was defined with a type of Persistable instead of the more specific Address. I thought that since Address extends Persistable it would still work, but unfortunately it doesn't. The only way to make it compile was to move the static ID into the Address class like this:

code:
Address extends Persistable {
  public static Property<Address, Long> ID = ...; // instantiate
  public static Property<Address, String> CITY = ...; // instantiate
}
But now I'm stuck in the same way: I can't use inheritance on these properties anymore. Does anyone have any idea how I can make this work while keeping the inheritance of my common Properties? It's almost like I need the find method to accept a Property of T or of a parent of T.

FateFree fucked around with this message at 00:21 on Jan 26, 2010

FateFree
Nov 14, 2003

rjmccall posted:

An excellent idea! Surely your work is touched with genius.

code:
public <V> List<T> find(Property<? super T, V> property, V value);

Well thanks, I had tried to use super but apparently I wasn't using it correctly. Your way solved my problem.

Was there something wrong with my original Property<T, V> idea in general, to earn the sarcastic genius remark?
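
For anyone who finds this later, here's the whole pattern boiled down to a compilable sketch (names shortened from my real model):

code:
import java.util.Collections;
import java.util.List;

class Property<T, V> {
	final String column;
	Property(String column) { this.column = column; }
}

class Persistable {
	public static final Property<Persistable, Long> ID = new Property<Persistable, Long>("id");
}

class Address extends Persistable {
	public static final Property<Address, String> CITY = new Property<Address, String>("city");
}

class PersistableDao<T extends Persistable> {
	// ? super T is the key: a PersistableDao<Address> now accepts both
	// Persistable.ID and Address.CITY, while a wrong value type still won't compile
	public <V> List<T> find(Property<? super T, V> property, V value) {
		return Collections.<T>emptyList(); // real query omitted
	}
}
With that in place, new PersistableDao<Address>().find(Persistable.ID, 1L) and .find(Address.CITY, "new york") both compile.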

FateFree
Nov 14, 2003

rjmccall posted:

Nope; it's just that it's pretty much exactly what I told you to do twenty days ago when you asked this question then.

Ah, touché, salesman. I edited my last question; since I asked on a couple of forums I didn't remember that I'd asked here. It works great now, thank you.

FateFree
Nov 14, 2003

I'm bored and want to try something a little different. I want to code some people objects and, more importantly, try to model their thought process as close to the real thing as possible, which I might expand into a game at some point but is really just a proof of concept for now. I'd like to have a virtual room full of them and basically watch them interact.

This is far from what I normally code, so my question is how best to architect it. The first things that come to mind: should each person run in their own thread, and should I be looking at the Executor classes for that? Can anyone give a high level overview, or any resources, so I can start on the implementation?
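
Here's the bare-bones shape I'm thinking of as a starting point, one Runnable per person on a pool (every name here is a placeholder):

code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RoomSim {
	static class Person implements Runnable {
		private final String name;
		Person(String name) { this.name = name; }

		public void run() {
			try {
				while (!Thread.currentThread().isInterrupted()) {
					// perceive -> think -> act would go here
					System.out.println(name + " is thinking...");
					TimeUnit.MILLISECONDS.sleep(500);
				}
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt(); // restore the flag and exit
			}
		}
	}

	public static void main(String[] args) throws InterruptedException {
		ExecutorService pool = Executors.newCachedThreadPool();
		for (int i = 0; i < 5; i++)
			pool.submit(new Person("person-" + i));

		TimeUnit.SECONDS.sleep(3);
		pool.shutdownNow(); // interrupts every Person's loop
	}
}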

FateFree
Nov 14, 2003

I'm using reflection for a Service toString. As an example, AccountService has a UserService field; UserService is an interface implemented by UserServiceImpl, whose parent is AbstractServiceImpl.

Assuming I already have the Field object for the UserService, how can I determine whether it holds an implementation of AbstractService? For example, if I print out field.getType(), I see the interface UserService. But if I try AbstractServiceImpl.class.isAssignableFrom(field.getType()), it always returns false.

Sorry for the convoluted question, but it boils down to: what condition can I use so that any field whose value extends AbstractServiceImpl returns true?
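
A sketch of the direction I'm leaning: since getType() only ever reports the declared interface, check the runtime value instead (IllegalAccessException handling left out):

code:
field.setAccessible(true); // in case the field isn't public
Object value = field.get(serviceInstance); // the object the field actually holds
if (value != null && AbstractServiceImpl.class.isInstance(value)) {
	// the runtime class (e.g. UserServiceImpl) extends AbstractServiceImpl,
	// even though field.getType() only ever reports the declared interface
}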

FateFree
Nov 14, 2003

If I'm making a generic class <T extends BaseObject, V>, how can I enforce that V is a Long? If I say <V extends Long> I get an ugly warning about not being able to extend final classes, even though it seems to accomplish what I want. Is there any other way?
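
edit - realized that since Long is final, V could only ever be Long anyway, so dropping the parameter entirely does what I want (Thing/put are placeholder names):

code:
// Long is final, so <V extends Long> can only ever mean V = Long.
// Dropping the parameter gives the same safety with no warning:
public class Thing<T extends BaseObject> {
	public void put(T key, Long value) { /* ... */ }
}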

FateFree
Nov 14, 2003

Yes, this turned out to be the answer. I got caught up in generics insanity before I realized it. Thanks.

Now for something a little harder. I am using EhCache (a thread safe cache based on ConcurrentHashMap) to store a counter (an AtomicInteger). The counter is basically the count of all rows in the database for some query. I'd like to proactively manage this value to reduce the number of queries, so I have code like the following, which I will call any time someone saves a new object to the database:

code:
AtomicInteger counter = get(key);

if(counter != null)
    counter.incrementAndGet();	
I'm simply pulling the counter from EhCache, performing a null check, and incrementing the value, which stays in cache.

My question is: is this thread safe? I know AtomicInteger's increment is atomic, and I know EhCache is thread safe, but I don't know whether this particular combination is safe. I think it is because of the AtomicInteger; if it were a regular int with counter++ it would definitely be unsafe. Still, I'd feel better with someone else's opinion.

FateFree
Nov 14, 2003

Well, let's see: there may not be a count in cache yet, and I would not have the information to add one. All I know is that a new save means count++ and a delete means count--; the actual number requires a query (whoever requests the count will first check the cache, and if it's not there, query and place it in cache).

But if another thread inserted something after my get, it means they will be putting the most up-to-date number in cache, since it's coming directly from the query. So I believe in that sense it is safe.

FateFree
Nov 14, 2003

Just when you think you've got something figured out, haha. Thank you sir. I suppose the best solution is to maintain the count in cache and just clear it on save or delete operations. I guess there is no way to ever guarantee an accurate count; even if it was never cached, another thread could have saved a value immediately after I retrieved mine.

FateFree
Nov 14, 2003

tef posted:

http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ConcurrentHashMap.html#putIfAbsent(K, V)

if you used this instead of the null check you could avoid the race.

Unfortunately, while the idea of putIfAbsent is nice, the only way it could ever be effective is if you could pass some sort of interface that creates the value, rather than the value itself. Imagine the value comes from an expensive database call: you only want to make that call if the key is in fact absent from the cache, but this method requires you to compute the value up front every time.

I'm hoping one day they improve that.

FateFree
Nov 14, 2003

Can anyone explain why I can't get the following code to work with try/finally blocks? This is for gzip compression:

Compress:
code:
	private byte[] compress(String input) throws IOException {
	    ByteArrayOutputStream bos = new ByteArrayOutputStream();
	    BufferedOutputStream bufos = new BufferedOutputStream(new GZIPOutputStream(bos));
	    bufos.write(input.getBytes());
	    bufos.close();
	    byte[] retval = bos.toByteArray();
	    bos.close();
	    return retval;
	}
Decompress:
code:
	private String decompress(byte[] bytes) throws IOException {
		ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
	    BufferedInputStream bufis = new BufferedInputStream(new GZIPInputStream(bis));
	    ByteArrayOutputStream bos = new ByteArrayOutputStream();
	    byte[] buf = new byte[1024];
	    int len;
	    
	    while((len = bufis.read(buf)) > 0)
	      bos.write(buf, 0, len);
	    
	    String retval = bos.toString();
	    bis.close();
	    bufis.close();
	    bos.close();
	    return retval;
	}
In its current form it works fine: I catch the IOException later and wrap it in a compression exception. But what I would like is to make sure the streams are closed in a finally block before I rethrow the exception. However, any attempt to move the close() calls into a finally block results in an "Unexpected end of ZLIB input stream" EOFException.

Here was one such attempt:
code:
private byte[] compress(String input) throws IOException {
		ByteArrayOutputStream baos = null;
		GZIPOutputStream gzos = null;			
		
		try
		{
			baos = new ByteArrayOutputStream(); 
			gzos = new GZIPOutputStream(baos);
				
			byte[] baFileContent = input.getBytes(); 
			
			for(int i = 0; i < baFileContent.length; i++) 
				gzos.write(baFileContent[i]);
			
			return baos.toByteArray();				
		}
		finally
		{
			if(isNotNull(gzos))
				gzos.close();

			if(isNotNull(baos))
				baos.close();
		}
	}
code:
	private String decompress(byte[] bytes) throws IOException {
		ByteArrayInputStream bais = null;
		ByteArrayOutputStream baos = null;
		GZIPInputStream gzis = null;		
		
		try
		{
			bais = new ByteArrayInputStream(bytes);
			baos = new ByteArrayOutputStream();
			gzis = new GZIPInputStream(bais);
			byte[] buffer = new byte[1024];
			
			for(int len; (len = gzis.read (buffer,0,buffer.length)) != -1; )
				baos.write(buffer, 0, len);
			
			return new String(baos.toByteArray());
		}
		finally
		{
			if(isNotNull(gzis))
				gzis.close();

			if(isNotNull(bais))
				bais.close();

			if(isNotNull(baos))
				baos.close();			
		}	
	}

FateFree
Nov 14, 2003

Thanks for the explanation, I appreciate it. With that in mind, is it possible to consolidate my input and output streams, since they just wrap each other a few times, and call close on only the outermost stream? For instance, turn this:
code:
ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
BufferedInputStream bufis = new BufferedInputStream(new GZIPInputStream(bis));	   
to this:
code:
BufferedInputStream bufis = new BufferedInputStream(new GZIPInputStream(new ByteArrayInputStream(bytes)));	   
and call close on just the buffered input stream?
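
edit - for anyone else who hits the EOF error: the return expression is evaluated before the finally block runs, so toByteArray() was grabbing the bytes before close() had written the gzip trailer. One way to keep the finally and still get complete data, as a sketch:

code:
private byte[] compress(String input) throws IOException {
	ByteArrayOutputStream baos = new ByteArrayOutputStream();
	GZIPOutputStream gzos = new GZIPOutputStream(baos);
	try {
		gzos.write(input.getBytes());
		gzos.finish(); // writes the gzip trailer now, before we grab the bytes
	} finally {
		gzos.close(); // closing the outer stream closes the wrapped one too
	}
	return baos.toByteArray(); // safe: runs after the finally
}
And yes, closing just the outermost stream is enough, since each wrapper closes the stream it wraps.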

FateFree
Nov 14, 2003

Sorry to bring this back, but I have a concurrency question about using ConcurrentHashMap. I've voiced my complaints in the past about the putIfAbsent method not taking a callback interface, so that an expensive object could be created and put into the cache only if its key is absent. While waiting for such a thing to exist, I defined my own cache interface with a method called getOrPut, which does the same thing through a callback:

code:
@Override
public V getOrPut(K key, ICreateValue<V> createValue) {
	V element = concurrentMap.get(key);

	// if the element was not found in cache, create it
	if(element == null)
	{
		V object = createValue.create();

		// store in cache (note: nothing stops two threads from both getting here)
		concurrentMap.put(key, object);

		return object;
	}

	return element;
}
As you can see there is a race condition here (two threads can both see null and both create the value), and while it hasn't caused problems so far, I can see how it could leave an invalid value in the cache over time. Synchronizing the whole method of course defeats the purpose of using a concurrent hash map in the first place.

So I began to think of an alternative. What if I called concurrentHashMap.putIfAbsent(key, valueWrapper), where ValueWrapper is an object that holds the object to be cached plus an ICreateValue interface that creates it?

So for instance the flow would go like this:

The client calls cache.getOrPut with a key and an ICreateValue callback that can create the expensive object (most likely from the db).

The ConcurrentMap stores the key and a ValueWrapper via putIfAbsent.

Whichever ValueWrapper comes back from the cache, call getValue() on it and return that to the client. ValueWrapper would look like this:

code:
public class ValueWrapper {
	private final ICreateValue createValue;
	private Object object;

	public ValueWrapper(ICreateValue createValue) {
		this.createValue = createValue;
	}

	public synchronized Object getValue() {
		if(object == null)
			object = createValue.create(); // only the first caller pays the creation cost

		return object;
	}
}
Wouldn't this essentially ensure that only one expensive object is created, while leaving the synchronization around the cache itself untouched and speedy? Essentially the only blocking should be between threads looking for the same element, and it would only be slow the first time, before the object is created.

I'd really appreciate thoughts about this, especially if I am missing something.
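
To make it concrete, here's the whole flow as a compilable sketch (LazyCache and Wrapper are illustrative names; ICreateValue is the callback interface from my earlier post):

code:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

interface ICreateValue<V> { V create(); }

public class LazyCache<K, V> {
	private final ConcurrentMap<K, Wrapper<V>> map = new ConcurrentHashMap<K, Wrapper<V>>();

	static class Wrapper<V> {
		private final ICreateValue<V> createValue;
		private V value;

		Wrapper(ICreateValue<V> createValue) { this.createValue = createValue; }

		synchronized V getValue() {
			if (value == null)
				value = createValue.create(); // only the first caller per key pays this
			return value;
		}
	}

	public V getOrPut(K key, ICreateValue<V> createValue) {
		Wrapper<V> candidate = new Wrapper<V>(createValue); // cheap: nothing created yet
		Wrapper<V> existing = map.putIfAbsent(key, candidate);
		// whichever wrapper won the putIfAbsent race is the one everyone uses,
		// so create() runs at most once per key
		return (existing != null ? existing : candidate).getValue();
	}
}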

FateFree
Nov 14, 2003

I've been spoiled by web frameworks for too long, and now I need to write a gritty ServletFilter myself and could use a little help on how to do it.

The purpose of the filter is to run last in the filter chain, take all the html that is meant to be returned as the response, strip out all the content except the elements that match a given id, and return the fragmented response.

Parsing aside, how do I go about getting the html body from the response and sending back a modified version of it? And how do I ensure the response won't get partially flushed while I'm working on it?

FateFree
Nov 14, 2003

Thanks, that's a great high level explanation, I'll get looking into that. Appreciate it.
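
edit - for reference, here's the skeleton I sketched out from that explanation (the standard capture-the-response-in-a-wrapper pattern; extractFragments is a placeholder for the parsing step):

code:
import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class FragmentFilter implements Filter {

	public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
			throws IOException, ServletException {
		final CharArrayWriter buffer = new CharArrayWriter();

		// hand the chain a wrapper whose writer fills our buffer instead of the socket,
		// so nothing can get flushed to the client while we work on it
		HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper((HttpServletResponse) res) {
			public PrintWriter getWriter() {
				return new PrintWriter(buffer);
			}
		};
		chain.doFilter(req, wrapper);

		String fragment = extractFragments(buffer.toString()); // parsing goes here
		res.setContentLength(fragment.length());
		res.getWriter().write(fragment);
	}

	private String extractFragments(String html) {
		return html; // placeholder
	}

	public void init(FilterConfig config) { }
	public void destroy() { }
}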

FateFree
Nov 14, 2003

rhag posted:

The most common answer I've seen on the forums from the devs is to use a Set. I can see their reasoning, and it's a sound one. However, using a Set poses a different set of problems (namely, one cannot just add children to the parent without an ID set. That is...before the children are saved).

This doesn't have to be a limitation. My solution to the equals/hashCode problem with Hibernate is to have two identifiers: an autoincremented pk, plus a String uid that is just mapped as a unique column. My hashCode/equals only look at the uid, which gets assigned at object creation through UUID.randomUUID().toString().

This way you can put any object in a set before persisting it, and equals still works after you persist it, because Hibernate populates the uid it reads back from the table. Make sense?
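
In code, the identity part is just this (a sketch; Hibernate mapping omitted, names illustrative):

code:
import java.util.UUID;

public abstract class BaseEntity {
	private Long id; // autoincrement pk, assigned on save
	private String uid = UUID.randomUUID().toString(); // unique column, assigned at construction

	public boolean equals(Object o) {
		if (this == o) return true;
		if (!(o instanceof BaseEntity)) return false;
		return uid.equals(((BaseEntity) o).uid); // identity never depends on the db-assigned id
	}

	public int hashCode() {
		return uid.hashCode();
	}
}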

FateFree
Nov 14, 2003

I was wondering if anyone had a clever solution to this problem. I have a method that takes a String of html and a Set<String> of ids pertaining to elements in the html. The goal is to take the original html, find all of the tags that match one of the ids, and return a new html fragment as a string.

So for example, given this html and the set of ids {"1", "4"}:

code:
<html>
 <div id="1"></div>
 <div id="2"></div>
 <div id="3"><span id="4"></span></div>
</html>
I would expect this result:

code:
<div id="1"></div>
<span id="4"></span>
Currently I am using JSoup, which parses the html in memory and then traverses it looking for elements matching the ids I supply, and that works fine. As a bit of a hobby, though, I'm looking for a way to do this very fast.

The only thing I can think of is a SAX parser solution, where I maintain a map of String id -> String htmlFragment. That seems pretty tough to implement correctly and efficiently. I wonder if a regular expression might work, although that seems pretty ugly. I'd welcome any ideas. And yes, the html is valid xml.
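
Here's roughly the JSoup version I have now, with the method shape simplified:

code:
import java.util.Set;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public static String extractByIds(String html, Set<String> ids) {
	Document doc = Jsoup.parse(html); // full in-memory parse - the part I'd like to beat
	StringBuilder out = new StringBuilder();
	for (String id : ids) {
		Element el = doc.getElementById(id);
		if (el != null)
			out.append(el.outerHtml()).append('\n');
	}
	return out.toString();
}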

FateFree fucked around with this message at 20:09 on Dec 15, 2011

FateFree
Nov 14, 2003

I'm trying to create a zip file on the fly containing a bunch of csv files to return from a servlet, and it's confusing as hell with writers and readers and streams. A little guidance would be great. Here are the chunks of code I have that somehow need to work together:

code:
ZipOutputStream zip = new ZipOutputStream(outputStream); // output stream coming from httpResponse, thats all fine

// using the openCSV library to create the csv file, if any others are easier please let me know
CSVWriter writer = new CSVWriter(Writer?); // what writer do I use? I don't want to write to a file
writer.writeNext(entries); //assume i get these values somewhere
writer.close();

// at this point should I have the csv file in memory somewhere? and then try to copy it into the zip file?
	
int length;
byte[] buffer = new byte[1024 * 32];	
zip.putNextEntry(new ZipEntry(getClass() + ".csv"));
					
// the 'in' doesn't exist yet - where am I getting the input stream from?
while((length = in.read(buffer)) != -1)
	zip.write(buffer, 0, length);
	                
zip.closeEntry();
zip.flush();

FateFree
Nov 14, 2003

rhag posted:

I don't really understand why one would need to use a library but anyway. From the looks of it, CSVWriter accepts a writer as an argument on the constructor. You can pass it a StringWriter as the writer to write to, and then just call stringWriter.toString() to get the data it wrote. The buffer will become something like
code:
buffer = stringWriter.toString().getBytes()

Thanks for the info. I actually figured out a way to do it without the StringWriter, and just use the zip's output stream directly:

code:
CSVWriter writer = new CSVWriter(new OutputStreamWriter(zos));
I used a library for the csv because I didn't want to worry about escaping commas and quotes and whatever else is standard for a csv file.
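
The full loop ends up looking something like this (entry name and row source are placeholders):

code:
ZipOutputStream zos = new ZipOutputStream(outputStream); // outputStream from the httpResponse
CSVWriter writer = new CSVWriter(new OutputStreamWriter(zos));

zos.putNextEntry(new ZipEntry("report.csv"));
writer.writeNext(entries);
writer.flush();   // push buffered chars into the current zip entry
zos.closeEntry(); // but don't close the writer here - that would close the zip too

// ...repeat putNextEntry / writeNext / flush / closeEntry for each csv...

zos.finish(); // writes the zip directory; the container closes the response stream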

FateFree
Nov 14, 2003

I need to collect metrics in memory and occasionally flush the contents to the database. What's the best data structure that will be threadsafe while metrics are being collected, but that I can also swap out so a timer thread can dump the in-memory ones to the database?

Should I just use Collections.synchronizedList, and every now and then swap references from a live list to a secondary list so metrics can keep being collected while the dump is going on? Is there a simple pattern for doing this?

FateFree
Nov 14, 2003

Hard NOP Life posted:

I think that a ConcurrentLinkedQueue will do just what you want without having to use a Collections.synchronizedList. However I don't think your approach to switching out the references is the correct way to do it.

I suggest that the dumping job immediately creates a copy of the queue, then does the inserts and finally tells the queue to removeAll elements in your copy. If no one else but this dump job is reading the queue then this should be safe.

Another possible more light-weight approach is for the dump job to create an iterator and then for each element, removes and insert it / batch it into your bulk insert. This however might not terminate if the iterator reflects new items that were added to the queue after its creation, which the javadoc doesn't spell out.

Hmm, thanks, let's see... the javadoc makes it sound like the iterator may see new elements, in which case I'd have the same concern about the job never ending.

However, in your first suggestion I like that removeAll isn't called until the end of the dump job: if I do the dump in one transaction and it fails, those elements won't be lost and the next job can pick them all up. To do the copy, should I just pass the original queue into a ConcurrentLinkedQueue(collection) constructor? If so this should work out pretty well, I appreciate it.

edit - well this turned out pretty easy:

code:
private ConcurrentLinkedQueue<Metric> queue = new ConcurrentLinkedQueue<>();

@Override
public void add(Metric metric) {
    queue.add(metric);
}

@Override
public void flush() {
   List<Metric> metrics = new ArrayList<>(queue); // snapshot of what's in the queue right now
   save(metrics);
   queue.removeAll(metrics); // only remove what was saved; anything added mid-flush survives
}

FateFree fucked around with this message at 00:46 on Aug 7, 2013

FateFree
Nov 14, 2003

Kilson posted:

Can't you just use .drainTo() on the queue? Just drain the queue to a new list, and it's atomic.

Well, there's no drainTo on ConcurrentLinkedQueue (that's a BlockingQueue method), but I'd rather not drain the queue anyway, because if the save fails I'd lose all the metrics in memory. With removeAll I empty them only if they successfully save to the db; otherwise they stick around for the next attempt.

FateFree
Nov 14, 2003

Kilson posted:

Of course, in cases where you *know* that you won't actually throw the checked exception (and there are many such cases), then do what you want. It just looked like you were advocating the Spring approach of wrapping *all* exceptions as runtime, which I still say loving sucks.

What's wrong with having only runtime exceptions? That's how newer languages like C# handle them. Checked exceptions have caused more pain, harm, and general misuse than any other feature. All it takes is a @throws declaration in the javadoc, and anyone who's calling your method can choose to handle the runtime exception or ignore it.

FateFree
Nov 14, 2003

I'm working on a large application that's locked into a terrible database that cannot perform. All of our queries hit an 'Index' table, which is a simple table with three columns: customerId, indexName, indexValue. It might contain rows like: 1, FirstName, John - and a search-by-name query would be something like select customerId from Index where indexName = 'FirstName' and indexValue = 'John'. The customerId is later joined to another set of tables.

Changing the database is not an option, but we are able to move the indexing off the database onto something else, and I'm looking for the best suggestion. Right now we are considering Lucene, because some of the indexed fields require fuzzy searching. However, there is very little document searching; we really only have name/value pairs that link back to a customer.

Is there anything more practical than Lucene for this kind of indexing? We essentially just need a quicker way to maintain an index, since our database can't do it.

FateFree
Nov 14, 2003

rhag posted:

What do you mean by cannot perform? Has that index table indexes created for the indexName and indexValue fields? Such as CREATE INDEX IDX1 on Index (indexName,indexValue) ? Is customerId a primary key on the other tables? Or has an index made for it?

However you look at it though, the existence of the Index table doesn't look good. Changing the database may not help in this case. Changing the developers may.

We are running a Teradata database. It is not meant to be an operational database, it will never be an operational database, and trying to make it perform like one will never work. This was big corporation political fuckery that cannot be unfucked, so it needs to be taken as a constant. (It is so bad that we are now storing our ENTIRE data model as a json object in one table, so we have no indexes because we essentially have no data model.) I can go on for days about how terrible this situation is, but that won't help; this is the situation and no one will change it.

So the problem is that we have a JSON blob as our entire database and no way to query it, and thus the terrible index table was born. Even that has been tuned to poo poo by Teradata's experts and it's still not good enough. So what are some alternative choices for an indexing application? We would really like something that's already built and open sourced; our best option at the moment is Lucene, because it can handle fuzzy searching on index values like email and first name. Are there any other options that might work?

FateFree
Nov 14, 2003

Are you sure it's not just the first line, and that the argument text isn't null? I'd bet that's the problem.

FateFree
Nov 14, 2003

Sab669 posted:

Any recommendations on some absolute bare-bones First Programming Language books for Java? My roommate is going into his second semester of Comp Sci and he's really struggling. Decided to wait until the last week of vacation to buy a book and try to get back into it before the next semester starts :v:

Java, A Beginner's Guide seems to have good reviews.

I like the Head First Java series for complete beginners. It has pictures.

FateFree
Nov 14, 2003

Can't you just have an AbstractOptions class that your base method uses, and have all your subclasses define inner classes that extend it? SubClass.Options extends AbstractOptions?

Otherwise you'll just have a base class with generics, BaseClass<T extends AbstractOptions>, your subclasses declared as SubClass extends BaseClass<SubClass.Options>, and SubClass.Options extends AbstractOptions.

This will let you do things like SubClass.Options options = subClass.getOptions(); (sketched below).
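
Sketched out, with all the names made up:

code:
abstract class AbstractOptions { }

abstract class BaseClass<T extends AbstractOptions> {
	protected T options;
	public T getOptions() { return options; }
}

class SubClass extends BaseClass<SubClass.Options> {
	public static class Options extends AbstractOptions {
		boolean verbose;
	}
}

// no casts needed at the call site:
// SubClass.Options options = new SubClass().getOptions();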


baka kaba posted:

I could just have the method accept everything and do an internal type check and save myself the headaches

This will cause the headaches, not save them!

FateFree fucked around with this message at 21:14 on Jan 22, 2014

FateFree
Nov 14, 2003

baka kaba posted:

The trouble (I think) I've been running into is having a generic class like Subclass<Subclass.Options>, which seems to work fine for the definitions - but when it comes to using it that inner Subclass is considered a raw type, and there are obviously complaints. I'll have to take a look at exactly what is going wrong, but I'm just worried that it's a sign that I shouldn't even be trying to do this, at least not this way

Generics in Java are notoriously horrible; don't take warnings as a sign you're doing the wrong thing. In some cases, suppressing warnings is the only thing you can do. But in your case I don't see how it wouldn't work... if it's complaining about raw types, that means you aren't declaring SubClass<SubClass.Options> everywhere you use it.

Declaring the inner classes static is essentially the same thing as storing them in a separate class file.

FateFree
Nov 14, 2003

Tusen Takk posted:

Gaze that abortion of code and shame me

Your code would be ten times more readable if you gave your labels and inputs actual names instead of label1, label2. Aren't you just forcing yourself to memorize what these things mean whenever you modify your code? Think about how much easier it would be if they were named commandTextField and commandLabel.

And use imports, for cryin' out loud, haha. You'll remove 30% of your code if you stop spelling out javax.swing everywhere.

Edit: never mind, looks like you are using something to auto-generate terrible code.

FateFree fucked around with this message at 14:17 on Mar 15, 2014


FateFree
Nov 14, 2003


Do you have your MovieNode class available? I'm bored at work and I'm going to go through and clean up your code with some pointers.
