Xenoveritas
May 9, 2010
Dinosaur Gum

Suspicious Dish posted:

No. All content goes through the HD pipeline now, regardless of vertical resolution. Note that encoders might do a better job with an upscaled video, so you might see a difference between an upscaled video and a source video, but it shouldn't be as drastic as before.

You sure? Because Admiral Curtis tested the advice to "always upscale to 720p" again in October and, well, YouTube still murdered his test video. So if that's the pipeline that's been in use "since April" it's still killing 480p videos.

Xenoveritas
May 9, 2010
Dinosaur Gum
I got it to go 480p for me, and it's just as bad. I'm still not entirely convinced that YouTube actually does 480p, though; their 480p option might just be the 360p version upscaled.

Xenoveritas
May 9, 2010
Dinosaur Gum
Plus I'm pretty sure some capture devices will only capture 1080i. So it's not outside the realm of possibility to wind up with a 1080i recording, even if the game is capable of 1080p.

Xenoveritas
May 9, 2010
Dinosaur Gum
You mean no-delay preview? The AverMedia card I have has a very slight delay when recording. It's not much, but it's there. (Only while recording, mind you, the delay when just displaying is basically non-existent.)

If you want no-delay, use the HD-DVR's passthrough. That's literally what it's there for. There's no way to do a no-delay preview via the recording software.

Xenoveritas
May 9, 2010
Dinosaur Gum
It looks to me like it's a palette issue. Whatever's capturing the image isn't getting the correct palette, so you've got flat-out wrong colors. Which makes sense, if it's based on Age of Empires II, since as I recall that game ran using 256 colors.

If it's possible to force the game to use 16-bit or 24-bit colors (also called "thousands"/"millions" or "65k"/"16 million"), try doing that? (Check in the graphics options.)

Xenoveritas
May 9, 2010
Dinosaur Gum

Dongattack posted:

Let me ask you guys for some audio advice. If you have a commentary track which is largely spoken at the same volume, no Pewdiepie screams or anything, is there anything to be gained by compressing it in Audacity? Will that be more pleasant to listen to, or just unnatural?

Assuming you don't overdo the compression, yes, you're probably going to improve things by compressing it. Unless you really did manage to keep talking at an even volume throughout the entire thing (which I find hard to believe), there are likely sections where you're quieter than the rest of the commentary and that need to be boosted. A compressor is a good tool for dealing with these automatically.

quote:

Second question, also about compression, pretty much the same question. Vice versa, is there anything to be gained by compressing the game audio to get a more even audio experience, or might that also be jarring and bland? I did this with a test video earlier; it seemed fairly okay as long as I manually boosted some key moments like exposition and scripted explosions in that case. But I don't know, I'm having a battle here between pleasant viewing and the original experience.

Maybe. Personally I'd say "stick with the original experience," but that doesn't always work. In some cases, just to make the LP enjoyable, you might need to compress the game audio, especially when it needs to compete against commentary. If it improves things, it improves things; there's not much else to be said.

Xenoveritas
May 9, 2010
Dinosaur Gum
Use a headset? I think that's how most people deal with TV audio.

Xenoveritas
May 9, 2010
Dinosaur Gum
Did you know: headsets can be used for just audio and not the microphone part? Wear headphones for the TV audio. You should still be able to hear people in the same room through the headphones. If not, pipe the audio into the headphones themselves, I guess.

The best way to not pick up TV audio is to not have the TV audio playing at all.

Xenoveritas
May 9, 2010
Dinosaur Gum
Apparently the only way to get 1080p video on YouTube is by using DASH, which can "intelligently" determine the bitrate it gives you, meaning that whatever you see may not be what other people see.

Xenoveritas
May 9, 2010
Dinosaur Gum

Dongattack posted:

I don't quite understand, do you mean that my connection might be giving me a quality worse than a person with a better connection would get? Cause that would explain why YouTube seemingly got worse for me when I was forced to switch to a wireless connection.

Exactly. What DASH does is basically split the video into a ton of small chunks and then make copies of each chunk at different resolutions and bitrates. When playing back, DASH decides how "good" your connection is and picks the next chunk based on that. In theory this means that on a slow connection you can still stream the video at a lower bitrate, while on a faster connection you'll use a higher bitrate.

In practice YouTube seems to randomly decide I'm on dial up and refuses to give me anything other than blurry crap.

Xenoveritas
May 9, 2010
Dinosaur Gum

Eonwe posted:

So I'm considering picking up a capture card to stream/do LPs on my Wii U, and I'm wondering if the capture card will help with my PC as well. I'd kind of like to stream Battlefield 4, and I play at 1080p. However, when I start streaming I definitely take a huge performance hit. Is it feasible to even use a capture card to stream from a PC, and if so, would that free up system resources? I did read the OP, didn't notice anything addressing this so I apologize if this is a common question.

Capture card, maybe not, but something like the HD PVR, yes. The HD PVR does the encoding in the device, not on the computer, so you can potentially use it to record your computer video (assuming you have HDMI out and can connect the HDMI passthrough to your monitor).

Nvidia ShadowPlay does something similar: recent Nvidia cards have an H.264 encoding chip on the GPU, and ShadowPlay uses that to record an MP4 while you play, without requiring the CPU to do the encoding the way something like FRAPS would.

Keep in mind that neither of these options is as "good" as a lossless recording, but at a high enough bit rate, it's really good enough.

Xenoveritas
May 9, 2010
Dinosaur Gum
Wait, people bother filling out the email field when downloading iTunes? Not only can you uncheck the "email me" checkbox, you can just leave it blank. The website will still let you download it.

Or buy a Mac and get it with the OS.

Xenoveritas
May 9, 2010
Dinosaur Gum
As I recall, FFMpegSource deals with variable framerates by pretending all frames are the same length by default. (Which is moronic.) Try adding fpsnum=30 to the FFVideoSource line:

code:
V=FFVideoSource("C:\Users\Ciaphas\Desktop\Final Fantasy XIV Realm Reborn 12.26.2013 - 14.17.17.250.DVR.mp4", \
fpsnum=30)
See if that resolves the audio desync.

Xenoveritas
May 9, 2010
Dinosaur Gum

I am the M00N posted:

I just encountered an odd issue. My Hauppauge HD PVR Gaming Edition is recording a PS2 game, and was able to record all audio and video, but it did not record the in-game voices, as in the characters' dialogue. I have no idea what causes this.

My best (only) guess is that the game is using 5.1 surround and you're only recording the stereo track. How do you have your PS2 hooked up to the HD PVR? The only way I can come up with for something like that to happen is if you have the standard audio cables hooked to the HD PVR and the optical audio port hooked up to a separate sound system. Of course, I'm not even sure the PS2 will output audio over both the conventional white/red cables and the optical port in the first place, but it's the only way I can think of for character voices not to be mixed into the game audio.

Edit:

khy posted:

I would like to stream my Wii/PS3 to a friend of mine online. What is the best capture device to use? Preferably something USB, since most of my internal expansion slots are in use or covered up by my video card, etc. I have no need for recording and only need a live view. I saw the devices in the OP and am leaning towards the AVerMedia C039 DVD EZMaker 7 as it's the cheapest, but I'm wondering if there's anything of equal quality but cheaper (perhaps something without recording software?)

Nope. Anything that can stream is pretty much going to be able to record in any case, especially if it's USB. (Since USB doesn't have enough bandwidth to stream the raw video data in the first place, those capture devices work by encoding internal to the device.)

Xenoveritas
May 9, 2010
Dinosaur Gum
To be fair, no one has given the correct answer yet, namely, if you don't want to stream in a widescreen resolution, don't.

Open up the Settings, go to Video, and change the resolution to be whatever you want it to be. If your game capture is something like 640x480 and you want to stream it exactly, set it to that.

Otherwise, OBS has two choices to honor your request to stream a non-widescreen source at a widescreen resolution: pillarbox it or stretch it.

Xenoveritas
May 9, 2010
Dinosaur Gum

Cheez posted:

It's not the stream being widescreen that's the problem. It's the SOURCES. I specifically said so.

No, you didn't. You said:

Cheez posted:

When it stops deciding all my game capture windows should be widescreen, regardless of what's actually inside the box, then I'll think about it.

So presumably the source isn't widescreen but is 4:3 instead. I don't know, because you haven't said what the source is. Then you said:

Cheez posted:

You mean the same one that leaves those giant black bars on the side of the source no matter what you do because it refuses to remove them? Even with cropping?

So you're talking about a source getting padded on the edges in the stream, again pointing to trying to stream a non-widescreen source into a widescreen stream. And there are two solutions if you insist on streaming at a resolution other than the source: pad the edges, or stretch to fill.

Change your stream resolution to match your source and watch as the problem vanishes.

Xenoveritas
May 9, 2010
Dinosaur Gum

Cheez posted:

I didn't say what the source was? What the hell do you think "game capture" means? Do you even use OBS/Xsplit? They both have a game capture source.

OK, now we'd need details, like what loving game you're trying to capture in the first place, what resolution the game is running at, and what resolution you're trying to stream at.

Your problem is that the two don't match, so OBS has to do something to fix it. You can move and resize the game source all you want in your final output, you know; you have complete control over it. It's a trivial fix, if only you could be assed to learn the software you're using.

Xenoveritas
May 9, 2010
Dinosaur Gum
Well, I still don't know what your problem actually is because you refuse to explain what you're trying to do, but you're right that Game Capture is buggy in OBS and always acts as if it's the resolution of the stream rather than that of the actual capture. However, it doesn't add black bars to the edges, and you can most certainly position it anywhere on the screen. Right click on the game capture, choose Properties, and uncheck "Stretch image to screen."

Yes, the red bounding box will still be wrong, but you can place it above another source and the edges will be transparent.

Or just use Window Capture instead, which works the way it should.

Xenoveritas
May 9, 2010
Dinosaur Gum

Cheez posted:

I didn't ask for help fixing the problems, and the fixes were not fixes. Why do I have to be thankful for this? In fact, most of the time people "helping" ignored half of what I said and had me repeating myself multiple times. I would have been better off completely ignoring everyone after the initial post and solution to my actual problem, that i actually wanted a solution to, and didn't involve trying to get me to pray to OBS Jesus.

No, you just failed to adequately describe your problem, so no one had any clue what the gently caress you were talking about and thought you were complaining about something that was trivial to fix.

Plus I expect window capture working with covered windows is dependent on Aero, since it's only when using a modern compositor that the covered portion of the window still gets rendered. (Note that Windows 8 still uses the Aero compositor; it just ditched most of the flashy effects, so it's less obvious.)

In any case, moving on...

Nvidia ShadowPlay and horrible audio desync

Someone's brought this up before. The latest Nvidia cards (650 and later) have a built-in H.264 encoder that can be used to record game footage using what Nvidia is calling ShadowPlay.

Unfortunately, the videos tend to have an issue at the start causing very low framerates and audio desync. By setting the fpsnum value when reading them using FFMpegSource, you can deal with the variable frame rate and get useful frames.

Sadly, this leaves the audio out of sync. When played back using Windows Media Player, the first several seconds are a garbled mess, but once you get past that, the video and audio work and are in sync. So clearly it's possible to get the bulk of the video working.

I've yet to figure out how to do that using FFMpegSource and AviSynth short of essentially guessing and checking at a delay value.

So the question is simply: anyone have any suggestions on getting FFMpegSource to resync audio past the first couple of seconds of a video? Or is the best solution still "guess and check" or "transcode to lossless using something else and then edit in AviSynth"?
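
(For reference, the "guess and check" version I'm talking about is just something like this - the file name and the half-second offset are made up, you tweak the number until things sound right:)

code:
FFIndex("shadowplay.mp4")
v = FFVideoSource("shadowplay.mp4", fpsnum=30)
a = FFAudioSource("shadowplay.mp4")
# Guess a delay, preview, and adjust by ear until the audio lines up.
a = DelayAudio(a, -0.5)
AudioDub(v, a)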

Xenoveritas
May 9, 2010
Dinosaur Gum

Cheez posted:

edit: Yeah, you know what? gently caress this. gently caress everything.

Don't let the door hit your rear end on the way out...

But seriously, I actually do have a thing I want to address.

quote:

being forced to use a loving piece of software he does not want or need

Yeah, no. No one's forcing you to use any software. Use whatever the gently caress you want. Hell, point your phone at your monitor and stream using FaceTime for all anyone here cares. There's no requirement that you use the software listed in the guides. Plenty of people use software they're more comfortable with that isn't listed in the OP. If you don't want to use the software we recommend, you don't have to.

However, you can't expect to find advice on software that no one here uses, and when you post claiming that a given piece of software doesn't meet your needs, people are going to try and solve whatever that problem is - this is a tech support thread! Especially when the problem sounds trivial. (And, seriously, "the red box doesn't exactly match the input source size" is a pretty freaking trivial problem.)

But in any case, everyone is free to use whatever software they want to LP. The TSF will, however, always try and steer people to the software we recommend - if only because it's what we're best able to provide answers about.

Xenoveritas
May 9, 2010
Dinosaur Gum
As I recall, the N64 is from that wonderful generation of consoles that output a 240p signal, AKA "a nonstandard video signal that happened to work with most analog TVs but is hit-or-miss with capture devices." So the problem may not be the capture device. See if you can connect a different source (such as a more recent console like a PS2/Xbox/GameCube or PS3/Xbox 360/Wii) and whether that works.

I've tried capturing from a PS2 running in PlayStation mode (where it outputs the same nonstandard 240p signal) and it's hit-or-miss at best whether I actually get anything.

Xenoveritas
May 9, 2010
Dinosaur Gum

Suspicious Dish posted:

TV signals have no standards. You just have to emit scanlines every once in a while. Instead of interlacing even/odd fields though, N64 and other early consoles simply always pushed even fields to the display.

Most digitizing hardware should be able to deal with this.

That's what I thought until I actually tried it. My (AverMedia, incidentally) capture card definitely did not like the PS2 when it was running a PlayStation game. I don't know what it is about the older console video signals, but it definitely fucks with capture cards. I think I was able to get it to work on something like one out of four attempts.

I also tried capturing a SNES using composite, which didn't work. The only way I was able to get a signal from the SNES was to use an RF adapter.

Xenoveritas
May 9, 2010
Dinosaur Gum

SSJ Reeko posted:

Right, of course. Thing is, MeGUI isn't giving me any more info than what I did. It just stops encoding, says "error" in the queue, and that's that.

http://i.imgur.com/Trpat6R.png?1

I'm unsure as to what more info I can give. I've attempted pulling audio from the script, the source video, and various audacity-ripped files for the video's audio in various formats.

Can you show us the log? (It's one of the tabs at the top.) It's unlikely to help, but it may give some hints.

Do you actually have NeroAAC installed?

It's also possible for MeGUI to completely screw itself up during an update, so it may be worth trying a fresh download and seeing if that magically works.

Xenoveritas
May 9, 2010
Dinosaur Gum
It's also possible to do the mixing in pure AviSynth:

code:
video = AviSource("original.avi")
audio = WAVSource("commentary.wav")

# Commentary audio needs to have the same sample rate as the original:
audio = audio.SSRC(video.AudioRate())

# If necessary, make commentary track stereo to match the original
audio = MergeChannels(audio, audio)

# Mix the commentary track and original audio together
audio = MixAudio(video, audio, 0.5)
# And dub it to the video
video = AudioDub(video, audio)
return video
However, you can't do ducking or anything like that using this method, so the "process in Audacity" method is still preferred.

Xenoveritas
May 9, 2010
Dinosaur Gum
Wait, it isn't? It's on the wiki. I know, because I personally added it to the wiki. It should be. I guess Zeratanis grabbed an earlier version of the page.

Xenoveritas
May 9, 2010
Dinosaur Gum

Mico posted:

Okay so here's what you do.

LoadPlugin("ffms2.dll")
v= FFVideoSource("recording.mp4")
a= FFAudiosource("recording.mp4")
Audiodub(v,a)

It will take a while the first time you open the script, it has to index the MP4 before being able to read it so if Virtualdub hangs, just let it do its thing for a few.

And done in that order, it will take even longer, because when it indexes for video it skips indexing for audio so it basically has to index the file twice. There are two fixes: the simplest is to do the audio first:

code:
a= FFAudiosource("recording.mp4")
v= FFVideoSource("recording.mp4")
And the other is to explicitly index the recording:

code:
FFIndex("recording.mp4")
v= FFVideoSource("recording.mp4")
a= FFAudiosource("recording.mp4")

Xenoveritas
May 9, 2010
Dinosaur Gum
It shouldn't take that long. It is true that it kind of looks like your video player has crashed when you first open a script that needs indexing, but it shouldn't take anywhere near as long as the play time to index the file.

If you want a visible status indicator, there's also a command line indexer that will give a progress meter.

Xenoveritas
May 9, 2010
Dinosaur Gum
They look very nearly identical to me?

The thing to remember with YouTube is that, depending on how you're receiving video, quality can vary wildly. If you're using the Dynamic Adaptive Streaming over HTTP (DASH) version of the player, it will attempt to "intelligently" alter the bitrate depending on your current connection, meaning that two different parts of the same video may look very different depending on how fast YouTube decided your connection was at the time. (If you're using the Flash player, you're likely using DASH.)

The second thing is that YouTube will reencode things to essentially a constant bitrate, meaning that when there's more motion on the screen, there will be less detail for static elements. There seems to be about the same amount of motion in both scenes in your case.

Short answer: looks fine to me.

Xenoveritas
May 9, 2010
Dinosaur Gum
The .ffindex file is the index we're referring to. It's required to allow "seek" access to the matching video. Basically it's what allows FFMpegSource to correctly produce frames for AviSynth.

If you delete it, the next time you open an AVS that uses that video, FFMpegSource will be forced to reindex the file and you'll have to sit through the process again. However, that's all it's used for; it's not an AviSynth script itself, just a file FFMpegSource uses.

Xenoveritas
May 9, 2010
Dinosaur Gum

chocolatekake posted:

So now I can treat it just like a .avi but just reference the video using FFMpegSource instead of AviSource, right? Meaning I should be able to follow the wiki again?

Pretty much. Assuming you've included FFMS2.avsi, you can use FFmpegSource2 to load an MP4 exactly like you would an AVI.

If you haven't, you need to do that whole FFVideoSource, FFAudioSource, AudioDub thing posted earlier in the thread.

Another caveat is that if your MP4 source uses a variable frame rate, you'll want to set fpsnum to something reasonable (probably 30 if you're targeting YouTube). A lot of things that record to MP4 turn out to use a variable frame rate. So basically, all told, you end up with something like:

code:
FFIndex("recording.mp4")
v = FFVideoSource("recording.mp4", fpsnum=30)
a = FFAudiosource("recording.mp4")
v = AudioDub(v, a)
And then v is your video, just as if it had come from AviSource.
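
If you do have FFMS2.avsi included, the one-liner version should look something like this (atrack=-1 just tells it to grab the default audio track):

code:
v = FFmpegSource2("recording.mp4", atrack=-1, fpsnum=30)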

Xenoveritas
May 9, 2010
Dinosaur Gum

Psion posted:

Okay, let's be specific then - I've got segmented video from nV shadowplay. It chunks off in 4GB intervals so you splice all the videos together. Thing is, there are minute variations in framerate from video to video so I can't just unalignedsplice them all at native (60 fps) or it throws an error. My goal is to splice and then edit the combined file for eventual output as one, edited video. In that case, should I use fpsnum=30? I've done this before using ChangeFPS(30) and it worked, but I want to follow best practices if possible.

You should be using fpsnum=30. ShadowPlay records variable frame rate MP4s. I'm pretty sure it literally dumps frames at "whatever speed they come out of the game" so if your game isn't running at exactly a constant frame rate, you'll wind up with a variable frame rate file.

Generally speaking, you should always be using fpsnum with FFVideoSource. At worst, it won't hurt anything.
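
In other words, load each segment with fpsnum and then splice; something like this (the segment names are made up, and ++ is just UnalignedSplice):

code:
v1 = FFVideoSource("segment01.mp4", fpsnum=30)
v2 = FFVideoSource("segment02.mp4", fpsnum=30)
# Once both segments are locked to the same constant frame rate, the splice should go through.
v1 ++ v2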

Xenoveritas
May 9, 2010
Dinosaur Gum
I updated the TSF Wiki page on image hosting with this new info on Minus; one of you should probably check to make sure I got it right.

Xenoveritas
May 9, 2010
Dinosaur Gum
Hit reload? I changed it to 15 MB, so you're seeing the old version of the page for some reason. I'm not sure what the newly limited file types are so I left that kind of vague, and I'm also not entirely clear on what the new restrictions are for creating an account.

Xenoveritas
May 9, 2010
Dinosaur Gum
I give up. Wikia is a goddamned broken piece of poo poo, and I've given up trying to figure out why edits don't actually show up on it.

Xenoveritas
May 9, 2010
Dinosaur Gum

Major_JF posted:

Other than "doesn't use vfw codecs" why is it bad?

Have you ever used it? The only reason to ever use VLC is if your OS literally doesn't support any other media player.

But basically: its codecs are crap, its output video quality is bad, it loves to drop frames when rendering for no apparent reason, and there are better options out there (like Media Player Classic).

Xenoveritas
May 9, 2010
Dinosaur Gum
Presumably they don't have the same framerate. Yes, yes, I know, helpful.

The reason why they wouldn't depends on the source. If the source was originally variable frame rate, then you may need to change how you're loading the video.

In any case, always use ChangeFPS to change the frame rate and never use ConvertFPS. (ConvertFPS frame-blends and looks awful.)
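
For example, if two clips refuse to splice because their frame rates don't match, something like this (assuming your clips are a and b) will force them to a common rate first:

code:
a = a.ChangeFPS(30)
b = b.ChangeFPS(30)
a ++ b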

Xenoveritas
May 9, 2010
Dinosaur Gum

frozentreasure posted:

Why are you using FFMS2 on Fraps files? Unless I'm missing something important, you should just use AVISource.

You are; you're missing the entire discussion of how the FRAPS AVI codec doesn't deal with colors quite correctly if the video was recorded using its YV12 mode. (It always converts back to RGB32 and does a bad job of it.) Using the FFMpeg codec deals with that.

The T posted:

EDIT: Oh, didn't even bother to look at your sample avs; FFIndex will actually tell it "index it again", I think (never used it), so what you'll want to do is just load it normally with FFVideoSource; that way the first time it will index it, but after that it won't re-index it.

All FFIndex says is "do a complete index of this file if one hasn't been done yet." By "complete" I mean that if you use FFVideoSource, FFMS2 won't bother indexing the audio stream, so if you then use FFAudioSource it has to reindex the entire file. You can use FFIndex to ensure only one index is done.

Once an index has been done, it won't be redone unless you delete the .ffindex file.

Xenoveritas
May 9, 2010
Dinosaur Gum
A big one.

To be serious, this is the type of question you should check out the PC parts picking thread for. They'll have better answers.

Xenoveritas
May 9, 2010
Dinosaur Gum
I'm wondering if it has the fields backwards. I'd have to stop and look at the original source to get an idea of exactly what's coming out of FFVideoSource to provide a meaningful answer.

Assuming it's coming out marked as "field based", see if ComplementParity fixes it.

If that does nothing, it might be worth throwing a SelectOdd at the end to see what that does.
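
In script form, the two things I mean would look something like this (file name made up; these are diagnostics to try, not a known fix):

code:
v = FFVideoSource("capture.mp4")
v = v.ComplementParity()  # swap the assumed field order
# If that does nothing, tack this on at the end instead and see what comes out:
# v = v.SelectOdd()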

Xenoveritas
May 9, 2010
Dinosaur Gum
I think your frame order is getting screwed up somehow; it looks to me like AviSynth is getting the frames in something like the order 0, 2, 1, 3, 4, 6, 5, 7, … So it's not the fields being in the wrong order; the frames are being jumbled around within a larger group.

If that's the case, the good news is that it's fixable in AviSynth. But without looking closer at what AviSynth is producing, I really can't help. Plus I'm not sure what would cause that in the first place.
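
Just to illustrate why it would be fixable: if the pattern really is consistently 0, 2, 1, 3 within every group of four, a reorder along these lines would put the frames back (purely hypothetical until someone confirms what the actual pattern is, and assuming your clip is in a variable called v):

code:
# Take every group of 4 frames and emit them in the order 0, 2, 1, 3,
# which undoes a 0, 2, 1, 3 jumble.
v = v.SelectEvery(4, 0, 2, 1, 3)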
