

  • Content count

  • Joined

  • Last visited

Community Reputation

42 Accepted

1 Follower

About YukinoAi

  • Rank
    Rear Guard

Contact Methods

Profile Information

  • Gender
    Not Telling


  1. YukinoAi

    Help with VFR

    You would have to refer to TIVTC's documentation for that.
  2. YukinoAi

    Help with VFR

    Again, Avisynth does not understand VFR and never will. You cannot extract VFR timecodes out of .avs files. In order to merge clips at different frame rates, you can either 1) change the FPS of one clip to match the other: http://avisynth.nl/index.php/VFR lists various ways to change the number of frames appropriately (adding/dropping) for syncing purposes, with a corresponding impact on quality; or 2) not change the FPS when merging, and use AssumeFPS() to join the two clips of different FPS, thus changing the total length of the clip. If changing the length of the video (#2), then you need to go back during the muxing stage and make the entire stream VFR. This involves identifying the different-framerate portions of the video and writing/stitching together your own timecodes_v2.txt file unique to that video stream. The mkvmerge CLI documentation shows what this file should look like as a reference.
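The timecodes_v2 format mentioned above is simple: a header line followed by one presentation timestamp (in milliseconds) per frame. As a rough illustration of stitching one together for a clip made of two segments joined at different frame rates, here is a minimal Python sketch; the segment lengths and rates are made-up examples, not taken from the post:

```python
# Sketch: build a timecodes_v2 file for a clip made of two segments
# joined with AssumeFPS(), so the final mux can restore VFR playback.
# Segment frame counts and frame rates below are hypothetical.

def vfr_timecodes(segments):
    """segments: list of (frame_count, fps) tuples in playback order.
    Returns timecodes_v2 lines: one timestamp in ms per frame."""
    lines = ["# timecode format v2"]
    t = 0.0
    for frames, fps in segments:
        for _ in range(frames):
            lines.append(f"{t:.6f}")
            t += 1000.0 / fps  # milliseconds per frame at this rate
    return lines

# e.g. 100 frames at 23.976 followed by 60 frames at 29.97:
lines = vfr_timecodes([(100, 24000 / 1001), (60, 30000 / 1001)])
with open("timecodes_v2.txt", "w") as f:
    f.write("\n".join(lines) + "\n")
```

The resulting file can be fed to mkvmerge via --timecodes as described above.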
  3. YukinoAi

    Help with VFR

    Avisynth/Vapoursynth do not understand VFR, and never will. I recommend not bothering to try, since VFR is simply not relevant there. VFR only matters during playback, not during either the filtering or encoding stages, provided you treat the video/audio completely independently until the final mux. In other words, you only ever need to deal with VFR twice: once on the initial source file to extract timing information, and a second time during the final mux to re-insert the VFR timing. Note that the encoder's FPS settings do not matter, provided you do not add/remove frames, because the vfr_timings.txt will override them anyway in the final mux. You could even implant false metadata by telling the encoder to encode at, say, 1 fps. I normally try not to do that, however tempting. Also, sometimes the "fix bitstream info" checkbox is required to fix the metadata, and other times it messes it up, so you will have to experiment if you want the metadata to be sane-ish.
Here are some templates I use for VFR encoding/muxing:

##encoding_example1.txt
::encoding VFR (x264)
set video=input.avs
ffmpeg -r 30000/1001 -i "%video%" -an -sn -pix_fmt yuv420p -f yuv4mpegpipe - | x264-10b - --demuxer y4m --output-csp i420 --crf 20 --preset veryslow --output "%video%.h264"
::encoding VFR (x265)
ffmpeg -r 30000/1001 -i "%video%" -an -sn -pix_fmt yuv444p -f yuv4mpegpipe - | x265-10b --input - --y4m --crf 18 --preset veryslow --output "%video%.h265"

## mkvmerge_readme.txt
::Extract generic tracks:
mkvextract tracks myvideo.mkv 0:myvideo.h264
::Extracting timecodes:
mkvextract timecodes_v2 myvideo.mkv 0:myvideo.txt
::Merging with timecodes:
mkvmerge -o myvideo.mkv --timecodes 0:myvideo.txt myvideo.h264

::batch_mode.txt.cmd
::generate timecodes
for /f "delims==" %i in ('dir *.mkv /b') do mkvextract timecodes_v2 "%i" "0:%i.txt"
::encode audio
call aencode *
::encode each avisynth script
set video=[gomaabura] Etotama - SP1 [BD 1080p FLAC] [47BAB603].mkv
ffmpeg -r 30000/1001 -i "%video%.avs" -an -sn -pix_fmt yuv444p -f yuv4mpegpipe - | x265-10b --input - --y4m --crf 18 --preset veryslow --output "%video%.avs.h265"
::merge
mkvmerge -o "%video%.h265.mkv" "%video%.audio0.aac" --timecodes "0:%video%.txt" "%video%.avs.h265"

::batch_v2
@echo off
for /f "delims==" %%i in ('dir *.mkv /b') do mkvextract timecodes_v2 "%%i" "0:%%i.txt"
call aencode *
set video=[gomaabura] Etotama - SP5 [BD 1080p FLAC] [93D2683C].mkv
call :encodeVFR "%video%"
set video=[gomaabura] Etotama - SP6 [BD 1080p FLAC] [C4CD44EE].mkv
call :encodeVFR "%video%"

:encodeVFR
set video=%~1
ffmpeg -r 30000/1001 -i "%video%.avs" -an -sn -pix_fmt yuv444p -f yuv4mpegpipe - | x265-10b --input - --y4m --crf 18 --preset veryslow --output "%video%.avs.h265"
mkvmerge -o "%video%.h265.mkv" "%video%.audio0.aac" --timecodes "0:%video%.txt" "%video%.avs.h265"
goto :eof

:end
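As a companion to the templates above: a timecodes file must contain exactly one timestamp per encoded frame, or the final mux will drift. This is a hedged Python sketch of a pre-mux sanity check (the file name and frame count used in the demo are hypothetical):

```python
# Sketch: sanity-check a timecodes_v2 file before handing it to mkvmerge.
# The file must contain one non-decreasing timestamp (in ms) per frame.

def check_timecodes(path, expected_frames=None):
    with open(path) as f:
        stamps = [float(s) for s in f
                  if s.strip() and not s.startswith("#")]
    assert stamps == sorted(stamps), "timestamps must be non-decreasing"
    if expected_frames is not None:
        assert len(stamps) == expected_frames, (
            f"{len(stamps)} timecodes for {expected_frames} frames")
    return stamps

# Hypothetical demo file with three frame timestamps:
with open("timecodes_check_demo.txt", "w") as f:
    f.write("# timecode format v2\n0\n41.708\n83.417\n")
print(check_timecodes("timecodes_check_demo.txt", expected_frames=3))
```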
  4. YukinoAi

    Help me convert this SUP to SRT

    Use Subtitle Edit.
  5. They are different methods of text alignment. Neither should be used for most OPs/EDs. Constantly going back and changing things is normal. Just try to make it easy to change things when you do.
  6. For KFX, it is important that the script always match the loaded video's resolution. That is not strictly necessary for editing/timing, but for typesetting it is important that they match.

Remember that words are made up of syllables, the discrete sounds people make. Groups of syllables together make up each word. This process of understanding words through syllables is so ingrained in native speakers that, internally, we interpret other languages by dissecting foreign words into syllables of our native language. This is why foreigners have thick accents: foreigners to a language are using the syllables they understand, and were trained to use, to speak a language with a different syllable set. The mistakes are obvious, and we call this error-prone process of syllable translation "having an accent." Common errors include adding, removing and approximating sounds. In addition, where one word/syllable ends and another starts is obvious to a native speaker but less obvious to a non-native speaker, so sometimes merging discrete words or "splitting" a word can happen.

I went ahead and downloaded Asuka's Beatless episode and it looks like, instead of tenbou, they used te wo. Japanese syllables do not include "ten", only "te". There is no syllable for a lone "n", unlike in English. "bo" and "wo" are nearly indistinguishable for non-fluent speakers. The context could be used to help figure that one out. The full line is "'kizutsui to, mae mui te' wo kurikaeshite yukan da", which they translated as "We'll get hurt. then [sic] face forward, time after time." That probably makes more sense than "kizutsuite / mae muite wo tenbou / kurikaeshite yukun da" as "Get hurt, / Then face forward... / Time and time again." So, how easy is it for a native speaker, with a dialect, to be understood by foreigners when speaking that foreign language?
It should not be surprising that you are hearing Code:Error as "error code" or "an error code", because that is probably what they are actually saying, or rather, the sounds actually made, due to them not speaking English natively. The different syllable set creates pauses in different places than where a native speaker would pause. What is meant, however, is probably "Code:Error", and in that situation, since they are attempting to speak English, I usually just paste in the English that was meant to be said, instead of the actual sounds they made or the Japanese syllables.

Random: sometimes the Japanese syllables can be used even when they are speaking English, if it is a random English word mixed in. So a Japanese person attempting to say "Cure" might be written as "Cure" in the OP_Translation but as "kyua" in the OP_Rom for romaji/KFX purposes. Same with "pyua". There are no good standards to follow for this weird edge case and typesetters/editors tend to just do whatever. Even I am not always consistent.

So here is the takeaway: how easy is it to modify the KFX to fix the "tenbou" mistake? Mistakes will happen. They are not a big deal. Just fix them when they do. While it is ultimately more of a styling choice, the subs you used tend to split too many sentences into smaller phrases while using hard-to-maintain fade in/outs. Suppose that you wanted to change your romaji and translation to match Asuka. Then you would need to merge the English lines, easy enough: just concatenate them. But the romaji? The way you are doing things now would require you to concatenate the lines and then manually re-align each individual word and adjust the "pop in" timing by hand. Repositioning/retiming everything would also be needed for most other minor changes, like moving one syllable into another line to suit the translation better, or when splitting lines.
For KFX, you would just delete the existing KFX for those lines, concatenate the romaji, and re-run the template on the merged line. The process is mostly automatic. That is what I meant by saying that using per-line offsets is a lot of work. Just take the time to learn how to use KaraEffector. The effects look just as nice as effects you can create manually in Aegisub and will be easier to maintain, thus saving you time in the long run.

That Beatless OPv2 looks really nice and shows effort. Translation aside, nice work on the typesetting! Modifying the colors goes a long way toward making the typesetting for OPs/EDs look nice. I think Black Rock Shooter's OP by Tsundere, Twintails by Commie, and Kill la Kill OP2 by FFF show different color effects well. Personally, I prefer color changes only to make sure the subs do not stand out, since I place readability above all else. The Magic Knight OP3 I did shows that.
  7. The above guidelines are just that: guidelines. Guidelines are meant to show you how other people work, allowing you to quickly incorporate the more useful bits into your existing process. If a guideline does not fit with what you are trying to do, then ignore it.

The reason for #4, avoiding per-line offsets, is that multiple offsets per line, or an offset for each line, is a lot of work. If you are putting that much work into each line, then it makes sense to just do \k timing and KFX instead. If you like the effect of making the text appear or disappear along with the lyrics, there are plenty of KFX templates that do that. It is just as much work, will end up looking better, and is easier to maintain/change. Personally, I do not like that effect, since I consider legibility the most important aspect of typesetting, and text disappearing/appearing mid-line makes it too difficult to read. That is just a personal styling choice, and just a guideline. If you like the effect, use it. For #5 and #6, those are mostly because I do a lot of bulk modifications and need to keep things organized.

For the Overlord ED, I adjusted the translation a bit to "flow" better in English and make it more obvious what the song was about. That falls under editing and is not usually necessary for typesetting. Sometimes one specific romaji source is slightly off, like you were getting with the Beatless OP/EDs, or there are other differences. However, if every romaji has the same lyrics for a line, then it is probably just the ear playing tricks. The line that goes "mae muite wo" is missing the "tenbou" that occurs right after it. The line ends too soon.
To sort out those problems I usually do the following:
1. Gather various romaji from around the net
2. Gather various English translations (optional)
3. Gather various kanji
4. Create a romaji_final.txt that contains the actual as-spoken Japanese/English of the song
5. Create an english_final.txt that contains a composite translation of #4, made from all of the English translations
6. Create "song-artist-seriesOP.ass" in Aegisub by timing and typesetting romaji_final.txt
7. Duplicate the OP_Rom lines and change the style/translation to OP_Translation (optional)
8. K-time the romaji (optional)
9. Add KFX using KaraEffector

There are always differences in romaji because the short (TV) version of the song is different from the full version. Short versions usually only truncate the song, but sometimes they skip stanzas, phrases or lone words. Also, sometimes there are multiple different OP/EDs based upon the same song that use different parts of it. Houkago no Pleiades, for example, kept appending stanzas until the final episode finally included the entire song. Keeping track of exactly what was happening from the raw lyrics to the spoken lyrics would have been a nightmare if the only thing I had to work from was an ED1-typeset-partial-lyrics.ass. It is really annoying and time-consuming to make changes to lyrics/timing/KFX/translation when what you are working from differs from what is being said, so it is best to avoid the issue by taking the time to generate that romaji_final.txt. Again, optional, but that is what I usually do when working from scratch, since it can potentially save a lot of time later on. Doing KFX on a non-existent syllable is so annoying...

For the Beatless OP/EDs, the font choices are okay but the font sizes are too small. Most typesetters use vertical offsets of 20 or more for the bottom at 720p to compensate for overscan and make the text easier to read. 20-30 seems typical, although I have seen 40 before. I usually do 16-20 depending upon the font.
The timing between lines on both of them needs a lot of work. If two lines are spoken sequentially, then their timing should be contiguous, with no space in between. As per Doki's timing guide, "Gaps are bad, continuity is good." If you look at the Overlord ED, there are no gaps in the timing for lines spoken reasonably close to each other: I extended the ending of each line to the start of the next. Scene changes are usually marked with a vertical purple line in Aegisub's audio viewer, so it should be easy to snap a line's timing to them if one occurs between two lines. To make transitions look natural, instead of having text instantly appear when there is no scene change, it is important to add \fade(150,150) to each line. I normally add \fade(150,150) to every OP_Rom line and then go back and change the 150 to 0 if there is a scene change, or extend it to at least 300 (usually 600+) if there are no nearby lines. To be honest, good typesetting is not that important, but good timing is very important, since "flashing" subs can be really annoying.

For color, black is a good choice for the Overlord II ED, and arguably for the Beatless OP, because there is so much black in the songs and they are intense. The Beatless ED, however, is a much more lighthearted song ("light", "sky", "love", "prima", "tomorrow", "forward"), so the typesetting (font and color choice) should reflect that. The font is okay, so just fix the styling. Playing around with it, I ended up with &H3B6CD8&, which is a light orange, since it matches the character's hair and most of the second part of the ED. Changed: font size 32->42, shadow 1.5->0, outline 3.75->1. I would be tempted to change colors between light blue (first scene), light green (second scene) and then leave it as light orange for the rest of the ED.
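The gap-closing pass described above (extend each line's end to the next line's start unless the gap is large) is mechanical enough to sketch. This is an illustrative Python sketch, not an Aegisub script; times are in milliseconds, and the 500 ms threshold is my own assumption:

```python
# Sketch: close small gaps between consecutive dialogue lines by
# extending each line's end time to the next line's start, per
# "gaps are bad, continuity is good". The max_gap threshold is
# an assumption; tune it, and snap to scene changes manually.

def close_gaps(lines, max_gap=500):
    """lines: list of (start_ms, end_ms) tuples sorted by start time."""
    fixed = []
    for i, (start, end) in enumerate(lines):
        if i + 1 < len(lines):
            nxt = lines[i + 1][0]
            if 0 < nxt - end <= max_gap:
                end = nxt  # snap this line's end to the next line's start
        fixed.append((start, end))
    return fixed

print(close_gaps([(0, 1800), (2000, 3500), (5000, 6000)]))
# the 200 ms gap is closed; the 1500 ms gap is left alone
```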
  8. *yawn* o.- Good morning. Disclaimer: I am not an experienced typesetter, only so-so. Here are some rules I follow when doing OPs/EDs:
1. The romaji and translation must use the same styling. R: Continuity.
2. With very narrow exceptions, no delays in showing parts of lines. Show it all at once or not at all. R: It places too much emphasis on the subs instead of the content of the subs. For special styling, use KFX instead.
3. Contiguous fade-ins/fade-outs should use \fade(150,150), unless there is a scene change. Non-contiguous fade-outs/ins should use 300 ms or more. R: Same as #2.
4. Unless the defaults would conflict with text on the screen, no line-specific offsets. R: Appropriate offsets in the styling are easier to maintain.
5. OP/ED styling shall be named OP_Rom, OP_Translation, and ED_Rom, ED_Translation, respectively. Each style should be on consecutive lines instead of interlacing them. R: I work with foreign subs in bulk, so this may not apply to you.
6. OP_Rom means what the character literally said goes here, and it is required to be present at all times. OP_Translation is a foreign translation, and may contain lines duplicated at OP_Rom. R: Same as #5.

If you want to emphasize that certain parts of certain lines have been translated a certain way, you could have multiple styles for each line and use the \r tag to switch between them. I did that with Junk Boy. Each line looked like this: {\blur0.8}Don't touch {\rEDEnglishGreen\blur0.8}junk boy, {\rEDEnglishBlue\blur0.8}no, no {\rEDEnglishPurple\blur0.8}lonely boy. The default style was EDEnglishRed, and the line switched between multiple colors. It may or may not be available at https://nagato.moe:4433/Misc/Videos if curious. Random: I wish blur could be specified in the style.

For the OP, if foreign-language viewers would otherwise need to constantly switch between reading the top and bottom of the screen to follow what was said, it is okay to have duplicate lines appear on the screen, one on top and one on bottom.
This is very important if every other line is mixed Japanese/English, but less important if multiple consecutive lines are English/foreign. Remember that subs should enhance, not interfere with, the viewing experience.

For KFX, there are many ways to implement them, from obscure scripting languages to Adobe After Effects. The laziest is a Lua Aegisub plugin called KaraEffector. If you go the KaraEffector route, read up on AssDraw as well and install "Convert clip to drawing", because then you can substitute arbitrary shapes, more appropriate to the specific OP/ED, for the hardcoded ones. Click on the guides index link in my signature and go down to the KFX section for download links/manuals and such. Most KFX require k-timed lines, which is a subcategory of timing, so it may help to read up on that as well.

None of the fonts used in the subs you posted were appropriate. Never use Comic Sans. Ever. The ED should have a cursive-style font (serif?) because the lyrics allude to love letters/intimacy. I usually use the NCOP/NCEDs instead of worrying about matching the text on-screen, but you could also do that. The OP is more open-ended. For Nep-Nep, blocky, pixelated fonts were best. The point being, the font matters, and I have been collecting fonts for a while. Want my \fonts folder?

Edit: Okay, I am more awake now. So you can already do k-timing. Without KFX, \kf timing usually looks better to me, but simple \k timing is better if you are going to do KFX; generating KFX from \kf timing does not go well. Also: try to k-time each Japanese syllable. AniDB has a kana romanisation guide, and hiragana ones are in the guides list. The fonts.7z folder is on Mega under typesetting if you want it. I normally use Explorer's "Preview" pane to browse. I also did a v2 of the Overlord II ED from episode 6; it is available at the link provided above. It looks like the Golumpa (Funi's) video stream has different timing than CR. The CR audio syncing seems more correct to me.
When stream copying, sometimes the first few frames are garbage and cannot be rendered unless the copy starts from a keyframe. Frame-accurate timing in Aegisub does not sync correctly with the Golumpa video stream when played back in MPC-HC. I think it has to do with the first few frames being garbage and throwing off the renderer, but I have not confirmed that. The v2 uses the transcode I worked from to avoid that issue.
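Converting \kf karaoke tags back to plain \k, as suggested above for KFX work, is a simple text substitution on the .ass event lines. A hedged Python sketch (the example line below is made up, not from any real script):

```python
import re

# Sketch: convert \kf karaoke tags to plain \k in an .ass event line,
# since most KFX templates expect simple \k timing.

def kf_to_k(text):
    # \kfNN -> \kNN, keeping the centisecond duration intact
    return re.sub(r"\\kf(\d+)", r"\\k\1", text)

line = r"{\kf20}ki{\kf15}zu{\kf25}tsu{\kf18}i{\kf30}te"
print(kf_to_k(line))
```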
  9. You can either demux the subtitles and change the font in the .ass files manually or in Aegisub, or change the fonts dynamically during playback. To demux the subs, use ffmpeg or MKVToolNix with a demuxing front-end like gMKVExtractGUI:
ffmpeg -i video.mkv -c copy out.ass
For ffmpeg, use the -map switch to specify which track to extract from multi-sub files. To change the default during playback: in MPC-HC, right-click the main window -> Subtitle Track -> make sure "Default Style" is checked, and that will override some of the styling information in styled subs. Fonts cannot be changed in burned-in/hardsubbed streams.
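For the demux-and-edit route, changing the font of every style in a demuxed .ass file is a one-field edit: in the [V4+ Styles] section, the second comma-separated field of each Style: line is the font name. A minimal Python sketch, assuming a standard .ass layout (the style and font names below are just examples):

```python
# Sketch: swap the font name in every Style: line of a demuxed .ass file.
# .ass Style lines are comma-separated; field index 1 is Fontname.

def change_font(ass_text, new_font):
    out = []
    for line in ass_text.splitlines():
        if line.startswith("Style:"):
            fields = line.split(",")
            fields[1] = new_font  # Fontname is the second field
            line = ",".join(fields)
        out.append(line)
    return "\n".join(out)

sample = "Style: Default,Arial,48,&H00FFFFFF\nDialogue: 0,0:00:01.00,0:00:03.00,Default,,0,0,0,,Hello"
print(change_font(sample, "Open Sans"))
```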
  10. YukinoAi

    DmonHiro 720p BD Quality

    I really like most of DmonHiro's encodes; just keep in mind that he does no work on the subs. They are just muxed Crunchyroll subs with the font size/position changed. In terms of A/V, they are always of acceptable quality, but they can be quite large sometimes, since he only ever does 10-bit AVC and never filters grain. From reading his commentary over many of his releases, he seems to do a lot of remuxing and messes up the sources sometimes. I think he once muxed in some ITX subs that weren't timed properly for a show, because why not. So, not being sure whether he used the BDs or did a second transcode for this show or that show is more sloppy documentation than maliciousness. I would give him the benefit of the doubt. Actually, I did some work on the subs for "Isekai wa Smartphone to Tomo ni" and used DmonHiro's 720p A/V. If you want a 1080p version of it, feel free to request a patch. Just link me the 1080p source.
  11. YukinoAi

    Fixing/Adjusting Colours

    For clarity, I recommend always differentiating between filtering artifacts and encoding artifacts. Filtering artifacts occur prior to sending the stream to the encoding software (x264/x265), while encoding artifacts occur as a result of the lossy encoding settings. If you are not sure which kind an artifact is, check how it looks in AvsPmod, or do a lossless encode; libx264 at crf=0 is lossless. If the artifact is present in the lossless encode, then it is a filtering artifact. If it is only present in the lossy encode, then it is an encoding artifact. Your screenshot comparison looks like it is showing filtering artifacts, not encoding ones, but feel free to encode lossless to double-check.

Text on the screen usually does not stand up to sharpening/line-darkening very well. What you are seeing is probably a result of the line darkening and hence is normal. Personally, I would just leave it, since it is a lot of work to get it perfect. However, the best solution is to use MaskTools to manually de-select the text for that scene; the second best is to scene-filter (or not) that specific scene, or to use lower or different filter settings. There is also the hack of playing with StackHorizontal, StackVertical, Trim, and concatenation (+) enough to exclude that portion of the video for those frames, although that is probably not a good idea. Taking the time to learn MaskTools would be better.

For color correction, it is not an exact science, but more of an art, really. autolevels() and GamMac() can point you in the right direction, as can looking at the histograms directly. It can also help to look at alternative scenes from the same source, or at different sources. Really though, it is just about having a reference for what it should look like, even if it is just in your head, and trying a lot of different settings. Tweak() should usually be used in conjunction with ColorYUV() when experimenting.
Sometimes the hue is wrong, Tweak(hue=-10), or the entire thing is undersaturated, Tweak(sat=1.2,maxSat=20), or too saturated, Tweak(sat=0.93). For luma, I usually do Tweak(bright=4) or Levels(0, 0.95, 255, 0, 255) (the second value is the gamma; 0.95 lowers the midtone luma by roughly 5%). Try not to modify the luma without scene filtering, because humans are very sensitive to it and will readily notice any distortions. Lowering it will always destroy detail, and really bright video just looks bad to me personally. If you cannot find settings you really like for the color and luma, just leave them alone. If the source has lousy color, no one will criticize you for leaving it alone, since you are accurately representing the source. Most encoders, including very experienced ones, do not bother changing chroma, even when it is obviously the wrong hue, undersaturated, or overly bright.
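For reference, the Levels() call above maps luma through a normalize/gamma/rescale chain. Here is a Python sketch of the underlying math; it ignores TV-range clamping and coring, which is a simplifying assumption on my part:

```python
# Sketch of the math behind Avisynth's
# Levels(input_low, gamma, input_high, output_low, output_high)
# for an 8-bit luma value. TV-range clamping/coring is ignored here.

def levels(y, in_lo=0, gamma=0.95, in_hi=255, out_lo=0, out_hi=255):
    x = (y - in_lo) / (in_hi - in_lo)           # normalize to 0..1
    x = max(0.0, min(1.0, x)) ** (1.0 / gamma)  # gamma < 1 darkens midtones
    return round(out_lo + x * (out_hi - out_lo))

# With gamma=0.95 the extremes stay put and the midtones dip slightly:
print(levels(0), levels(128), levels(255))
```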
  12. YukinoAi

    I wonder what function mux is

    With demuxing, the idea is to take an existing file and create a new file that is a subset of the original. In other words, you have at least two files where there was one, at least briefly. Demuxing does not necessarily imply that the original file is being modified, so "taking out" is not accurate. The usage is closer to "extracting" than "taking out", since nothing is being removed, and it is also not "separating", because the original is still intact. The original is not being "separated" but rather "copied/duplicated" in part, hence "extracting", in the sense of extracting from an archive: the archive is not modified afterwards.

In addition to that subset-creation operation, when some people say "demuxing" they mean that, besides creating a new file that is a subset of the original, they also intend to delete the original file and rename the subset file with the original file name. In other words, "demux" one or more components "out" of the file with the aim of "deleting" them. However, demuxing does not require deleting, so this other use of "demuxing" that focuses on deleting is better called "remuxing", since it is technologically infeasible to delete anything via stream targeting. "Demuxing deletions" are actually muxes internally. I.e., "demuxing" an audio track from dual-audio.mkv into single-audio.mkv is not demuxing, but rather remuxing, because you must initialize a muxer and then tell it not to mux some of the contents into the new file. In other words: copying, via a muxer, into a new file, which is remuxing. So, in practical terms, since "demuxing deletions" require the initialization of a muxer, the operation is actually remuxing, not demuxing. Demuxing by extraction theoretically only needs a start and an end point to copy from the existing file and does not necessarily need to initialize a muxer.

Muxing is useful for distribution, and demuxing is useful for obtaining sources to work with.
As in: demux the eng.aac from a Funi release and mux it with the JPN_BD.m2ts streams to create a dual-audio.bdremux.mkv file.
  13. YukinoAi

    Change video header

    Writing application/library is container-level info and easy to remove; just refer to the metadata-omission parameters in ffmpeg (-map_metadata -1). In other words, just change/modify the container. For encoder settings, the issue is that the information is part of the video stream itself: the video.avc or video.hevc stream carries that info. In order to strip it, you would need a tool that understands the encoded stream and modifies it appropriately, or an encoder that does not insert it to begin with. The laziest way is just to transcode the stream and not insert the metadata into the new stream:
ffmpeg -i input.mkv -c:v libx265 -x265-params no-info=1 -c:a copy out.h265.mkv
The "no-info" flag also works when using x265.exe directly. AVC is a bit more difficult, since most AVC implementations use libx264 and the x264 developers are explicitly hostile to the concept. Technically, it should be possible to write a patch for libx264, compile an ffmpeg.noMetadata.exe (or x264.exe) that does this, and redistribute it. Some are probably floating around the internet somewhere.

Now for the fun part: due to some bugs related to stream handling, certain versions of mkvmerge remove part of the encoding info from .hevc streams. I am not sure if some versions are bugged the same way with .avc streams. So the concept is: muxing sometimes removes meta info. Other muxing programs (not ffmpeg/MKVToolNix) sometimes did not implement the metadata-copying code correctly, or figured it would be too much hassle, or whatever, so when used, they strip metadata from the streams. There are a lot out there, like MP4Box and tsMuxeR (an eac3to dependency). Use tsMuxeR, or an alternative, to mux the file.mkv into an avc.stream.ts and then mux it back to mkv/mp4. That will strip most of the metadata from AVC streams.
Example:

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High 10@L4.1
Format settings, CABAC : Yes
Format settings, ReFrames : 9 frames
Muxing mode : Header stripping
Codec ID : V_MPEG4/ISO/AVC
Duration : 1mn 30s
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 23.976 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 10 bits
Scan type : Progressive
Writing library : x264 core 125 r2208 d9d2288
Encoding settings : cabac=1 / ref=9 / deblock=1:-2:-2 / analyse=0x3:0x133 / me=umh / subme=9 / psy=1 / psy_rd=0.60:0.00 / mixed_ref=1 / me_range=24 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=3 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=0 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=8 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=23 / scenecut=40 / intra_refresh=0 / rc=crf / mbtree=0 / crf=17.0 / qcomp=0.60 / qpmin=10 / qpmax=38 / qpstep=4 / ip_ratio=1.40 / pb_ratio=1.30 / aq=2:0.60
Language : English
Default : Yes
Forced : No
Color range : Limited
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709

Becomes...
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High 10@L4.1
Format settings, CABAC : Yes
Format settings, ReFrames : 9 frames
Codec ID : V_MPEG4/ISO/AVC
Duration : 1mn 30s
Bit rate mode : Variable
Maximum bit rate : 40.0 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 23.976 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 10 bits
Scan type : Progressive
Language : English
Default : Yes
Forced : No
Color range : Limited
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709

It is also possible to use custom ffmpeg builds with the enabled bitstream filters (-bsf, -bsf:v) to alter playback info like the FPS and matrices, and to insert arbitrary string+value pairs.
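To verify whether a given AVC stream still carries the encoder settings, note that x264 writes its banner and options string into the stream as plain ASCII inside an SEI message, so a crude byte scan can find it. A hedged Python sketch; it does not parse NAL units, it just searches the raw bytes, and the sample bytes are fabricated for illustration:

```python
# Sketch: crude check for the in-stream x264 settings banner in an
# AVC elementary stream. This is a plain byte search, not a NAL parser,
# so treat a positive hit as "metadata present", not as a full analysis.

def has_x264_banner(stream_bytes):
    return (b"x264 - core" in stream_bytes
            or b"x264 core" in stream_bytes)

# Fabricated example: start code + SEI NAL type byte + ASCII banner.
fake_sei = (b"\x00\x00\x00\x01\x06"
            + b"x264 - core 125 r2208 d9d2288 - options: crf=17"
            + b"\x80")
print(has_x264_banner(fake_sei))
```

Run it on the first megabyte of the demuxed .h264 before and after the tsMuxeR round-trip to confirm the banner was stripped.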
  14. YukinoAi

    Ordered Chapters removal

  15. I am not sure what you are asking. Are you trying to hardcode subs into the video stream? Or did you mean to ask about caption/subtitle manipulation? There are a lot of ways to hardcode subs. The easiest/laziest is HandBrake (recommended); the most versatile are probably the ass-render techniques for Avisynth/Vapoursynth. For general subtitle manipulation and format conversion, use Subtitle Edit. For .ass subtitles specifically, Aegisub is more specialized. Many players can also load subtitles dynamically during playback if they are named similarly to the video file. If you do not want to encode the video but still want access to the subtitles, you can use MKVToolNix to mux the two together into an mkv file. The mp4 container does not support subtitles robustly, so I would recommend against it if you would like to work with internal subtitles at the video stream's native quality. There is also software like Plex/Emby that can dynamically transcode video and embed (hardsub) the subtitles from softsub .mkv files if mkv compatibility is an issue.