I once wrote a simple program - it turned out to have no practical use, but was good practice - that simply takes microphone input and routes it right back out through the speakers.

There were several oddities in the program I could ask about, but the main one that interests me here is establishing an AudioFormat for such input - from the microphone. It's easy for a file because you can just call AudioSystem.getAudioInputStream() on it and read the format off the stream with getFormat(), but I don't think that can be done for the mic.

I was sure there had to be a better way than this, but I could not find any method that automatically picks an appropriate format, and indeed, in searching around, I saw people manually setting up the sample rates and such.

This is what I mean (this is the format for the mic):

AudioFormat inFormat = new AudioFormat(44100.0f, 16, 2, true, true); // 44100 Hz (CD rate), 16-bit, stereo, signed, big-endian
DataLine.Info inputInfo = new DataLine.Info(TargetDataLine.class, inFormat);

And the output format for the speakers:

AudioFormat outFormat = new AudioFormat(44100.0f, 16, 2, true, true);
DataLine.Info outputInfo = new DataLine.Info(SourceDataLine.class, outFormat);
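For context, here is a minimal sketch of the whole passthrough program those two lines belong to. It assumes the default capture and playback devices both accept this format (which is not guaranteed), so it checks with AudioSystem.isLineSupported() before opening anything; the class name and buffer size are arbitrary choices, not anything required by the API.

```java
import javax.sound.sampled.*;

public class MicPassthrough {
    public static void main(String[] args) throws LineUnavailableException {
        // 44100 Hz, 16-bit, stereo, signed, big-endian
        AudioFormat format = new AudioFormat(44100.0f, 16, 2, true, true);
        DataLine.Info inputInfo = new DataLine.Info(TargetDataLine.class, format);
        DataLine.Info outputInfo = new DataLine.Info(SourceDataLine.class, format);

        // Bail out politely if the default devices can't handle this format
        if (!AudioSystem.isLineSupported(inputInfo)
                || !AudioSystem.isLineSupported(outputInfo)) {
            System.out.println("Format not supported by the default devices");
            return;
        }

        TargetDataLine mic = (TargetDataLine) AudioSystem.getLine(inputInfo);
        SourceDataLine speakers = (SourceDataLine) AudioSystem.getLine(outputInfo);
        mic.open(format);
        speakers.open(format);
        mic.start();
        speakers.start();

        byte[] buffer = new byte[4096];
        while (true) { // loops forever; kill the process to stop
            int n = mic.read(buffer, 0, buffer.length);
            if (n > 0) {
                speakers.write(buffer, 0, n);
            }
        }
    }
}
```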

I had to play a lot with those settings - particularly the sample rate - to even get the sound right.

I also feel like this is a brittle way to do it. That is, if I understand it rightly, writing it this way is like saying "go find me whatever input device can give me audio in this format, and use it." So there is no guarantee it's actually the microphone.
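If picking the exact device matters, one option (a sketch, not the only way) is to enumerate the installed mixers and ask each one whether it supports a capture line in the desired format, then request the line from that specific mixer instead of from AudioSystem. Which mixers show up, and what they are named, is entirely platform-dependent.

```java
import javax.sound.sampled.*;

public class ListCaptureDevices {
    public static void main(String[] args) {
        AudioFormat format = new AudioFormat(44100.0f, 16, 2, true, true);
        DataLine.Info capture = new DataLine.Info(TargetDataLine.class, format);

        // Walk every installed mixer and report which ones can capture in this format
        for (Mixer.Info info : AudioSystem.getMixerInfo()) {
            Mixer mixer = AudioSystem.getMixer(info);
            if (mixer.isLineSupported(capture)) {
                System.out.println("Capture device: " + info.getName()
                        + " (" + info.getDescription() + ")");
                // To use this specific device rather than "whatever matches":
                // TargetDataLine line = (TargetDataLine) mixer.getLine(capture);
            }
        }
    }
}
```

On a headless machine the loop may simply print nothing, since no mixer supports capture there.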

Compare that, of course, to the ease of establishing a format for a file:

theTimeIsStream = AudioSystem.getAudioInputStream(theTimeIsFile);

The format is provided automatically along with the stream.
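To spell that out, the format travels with the stream and can be read off with getFormat(); the file name here is just a placeholder, not anything from the original program.

```java
import javax.sound.sampled.*;
import java.io.File;

public class FileFormatDemo {
    public static void main(String[] args) throws Exception {
        // "theTimeIs.wav" is a hypothetical file name used for illustration
        AudioInputStream stream = AudioSystem.getAudioInputStream(new File("theTimeIs.wav"));
        AudioFormat format = stream.getFormat(); // no manual sample-rate guessing needed
        System.out.println(format);
        stream.close();
    }
}
```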

So what I am wondering is: when reading input from the mic and writing it to the speakers, is it actually necessary to manually fiddle with the format settings like that every time, or is there a better or more reliable way of doing it?