

Community Reputation

39 Accepted

1 Follower

About YukinoAi

  • Rank
    Rear Guard

Profile Information

  • Gender
    Not Telling
  1. The above guidelines are just that: guidelines. Guidelines are meant to show you how other people work, allowing you to quickly incorporate the more useful bits into your existing process. If a guideline does not fit with what you are trying to do, then ignore it. The reason for #4, avoiding per-line offsets, is that multiple offsets per line, or an offset for each line, is a lot of work. If you are putting that much work into each line, then it makes sense to just do \k timing and KFX instead. If you like the effect of making the text appear or disappear along with the lyrics, there are plenty of KFX templates that do that. It is just as much work, will end up looking better, and is easier to maintain and change. Personally, I do not like that effect, since I consider legibility the most important aspect of typesetting, and text disappearing or appearing mid-line makes it too difficult to read. That is just a personal styling choice, though, and just a guideline. If you like the effect, use it. #5 and #6 are mostly because I do a lot of bulk modifications and need to keep things organized. For the Overlord ED, I adjusted the translation a bit to "flow" better in English and make it more obvious what the song was about. That falls under editing and is not usually necessary for typesetting. Sometimes one specific romaji source is slightly off, as you were getting with the Beatless OP/EDs, or there are other differences. However, if every romaji source has the same lyrics for a line, then it is probably just the ear playing tricks. The line that goes "mae muite wo" is missing the "tenbu" that occurs right after it; the line ends too soon.
To sort out those problems I usually do the following:
1. Gather various romaji from around the net
2. Gather various English translations (optional)
3. Gather various kanji
4. Create a romaji_final.txt that contains the actual as-sung Japanese/English of the song
5. Create an english_final.txt that contains a composite translation of #4, made from all of the English translations
6. Create "song-artist-seriesOP.ass" in Aegisub by timing and typesetting romaji_final.txt
7. Duplicate the OP_Rom lines and change the style/translation to OP_Translation (optional)
8. K-time the romaji (optional)
9. Add KFX using KaraEffector

There are always differences in romaji because the short (TV) version of the song is different from the full version. Short versions usually only truncate the song, but sometimes they skip stanzas, phrases, or lone words. Sometimes there are also multiple different OP/EDs based on the same song that use different parts of it. Houkago no Pleiades, for example, kept appending stanzas until the final episode finally included the entire song. Keeping track of exactly what was happening from the raw lyrics to the sung lyrics would have been a nightmare if the only thing I had to work from was an ED1-typeset-partial-lyrics.ass. It is really annoying and time-consuming to make changes to lyrics/timing/KFX/translation when what you are working from is different from what is being sung, so it is best to avoid the issue by taking the time to generate that romaji_final.txt. Again, optional, but that is what I usually do when working from scratch, since it can potentially save a lot of time later on. Doing KFX on a non-existent syllable is so annoying...

For the Beatless OP/EDs, the font choices are okay but the font sizes are too small. Most typesetters use vertical offsets of 20 or more for the bottom at 720p to compensate for overscan and make the text easier to read. 20-30 seems typical, although I have seen 40 before. I usually do 16-20 depending on the font.
The timing between lines on both of them needs a lot of work. If two lines are sung sequentially, their timing should be contiguous, with no gap in between. As per Doki's timing guide: "Gaps are bad, continuity is good." If you look at the Overlord ED, there are no gaps in the timing between lines sung reasonably close to each other; I extended the end of each line to the start of the next. Scene changes are usually marked with a vertical purple line in Aegisub's audio viewer, so it is easy to snap a line's timing to one if it occurs between two lines. To make transitions look natural instead of having lines instantly appear when there is no scene change, it is important to add \fade(150,150) to each line. I normally add \fade(150,150) to every OP_Rom line and then go back and change the 150 to 0 where there is a scene change, or extend it to at least 300 (usually 600+) where there are no nearby lines. To be honest, good typesetting is not that important, but good timing is very important, since "flashing" subs can be really annoying.

For color, black is a good choice for the Overlord II ED, and arguably for the Beatless OP, because there is so much black in those songs and they are intense. The Beatless ED, however, is a much more lighthearted song ("light," "sky," "love," "prima," "tomorrow," "forward"), so the typesetting (font and color choice) should reflect that. The font is okay, so just fix the styling. Playing around with it, I ended up with &H3B6CD8&, a light orange, since it matches the character's hair and most of the second part of the ED. Changed: font size 32->42, shadow 1.5->0, outline 3.75->1. I would be tempted to change colors between light blue (first scene), light green (second scene), and then leave it as light orange for the rest of the ED.
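One thing that trips people up with values like &H3B6CD8&: ASS colour tags store the bytes in blue-green-red order, the reverse of HTML #RRGGBB. A small sanity-check helper (my own sketch, not part of Aegisub) makes the conversion explicit:

```python
def ass_to_rgb(ass_colour: str) -> str:
    """Convert an ASS &HBBGGRR& colour tag to an #RRGGBB hex string.

    ASS stores colours in blue-green-red byte order, so &H3B6CD8&
    is blue=0x3B, green=0x6C, red=0xD8 -- a light orange.
    """
    digits = ass_colour.strip("&").lstrip("Hh").zfill(6)
    bb, gg, rr = digits[0:2], digits[2:4], digits[4:6]
    return f"#{rr}{gg}{bb}".upper()

print(ass_to_rgb("&H3B6CD8&"))  # -> #D86C3B
```

Pasting the resulting #RRGGBB value into any colour picker shows what the styled text will actually look like.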
  2. *yawn* o.- Good morning. Disclaimer: I am not an experienced typesetter, only so-so. Here are some rules I follow when doing OPs/EDs (R: = reason):
1. The romaji and translation must use the same styling. R: Continuity.
2. With very narrow exceptions, no delays in showing parts of lines. Show it all at once or not at all. R: It places too much emphasis on the subs instead of the content of the subs. For special styling, use KFX instead.
3. Contiguous fade-ins/fade-outs should use \fade(150,150), unless there is a scene change. Non-contiguous fade-outs/fade-ins should use 300ms or more. R: Same as #2.
4. Unless the defaults would conflict with text on the screen, no line-specific offsets. R: Appropriate offsets in the styling are easier to maintain.
5. OP/ED styles shall be named OP_Rom, OP_Translation and ED_Rom, ED_Translation, respectively. Each style should be on consecutive lines instead of interlacing them. R: I work with foreign subs in bulk, so this may not apply to you.
6. OP_Rom means what the character literally said goes here, and it is required to be present at all times. OP_Translation is a foreign translation and may contain lines duplicated from OP_Rom. R: Same as #5.

If you want to emphasize that certain parts of certain lines have been translated a certain way, you can have multiple styles per line and use the \r tag to switch between them. I did that with Junk Boy. Each line looked like this: {\blur0.8}Don't touch {\rEDEnglishGreen\blur0.8}junk boy, {\rEDEnglishBlue\blur0.8}no, no {\rEDEnglishPurple\blur0.8}lonely boy. The default style was EDEnglishRed, and the line switched between multiple colors. It may or may not be available at https://nagato.moe:4433/Misc/Videos if curious. Random: I wish blur could be specified in the style.

For the OP, if foreign-language viewers would otherwise need to constantly switch between reading the top and the bottom of the screen to follow what was said, it is okay to have duplicate lines appear on the screen, one on top and one on bottom.
This is very important if every other line is mixed Japanese/English, but less important if multiple consecutive lines are English/foreign. Remember that subs should enhance, not interfere with, the viewing experience.

For KFX, there are many ways to implement them, from obscure scripting languages to Adobe After Effects. The laziest is a Lua Aegisub plugin called KaraEffector. If you go the KaraEffector route, read up on AssDraw as well and install "Convert clip to drawing," because then you can substitute arbitrary shapes, more appropriate to the specific OP/ED, for the hardcoded ones. Click on the guides index link in my signature and go down to the KFX section for download links, manuals, and so on. Most KFX require K-timed lines, which is a subcategory of timing, so it may help to read up on that as well.

None of the fonts used in the subs you posted were appropriate. Never use Comic Sans. Ever. The ED should have a cursive-style font (serif?) because the lyrics allude to love letters and intimacy. I usually use the NCOP/NCEDs instead of worrying about matching the text on-screen, but you could also do that. The OP is more open-ended. For Nep-Nep, blocky, pixelated fonts were best. The point being, the font matters, and I have been collecting fonts for a while. Want my \fonts folder?

Edit: Okay, I am more awake now. So you can already do k-timing. Without KFX, \kf timing usually looks better to me, but simple \k is better if you are going to do KFX; KFX generated from \kf timing does not turn out well. Also: try to k-time each Japanese syllable. AniDB has a kana romanisation guide, and hiragana ones are in the guides list. The fonts.7z archive is on Mega under typesetting if you want it. I normally use Explorer's "Preview" pane to browse fonts. I also did a v2 of the Overlord II ED from episode 6; it is available at the link provided above. It looks like the Golumpa (Funi's) video stream has different timing than CR. The CR audio syncing seems more correct to me.
When stream copying, sometimes the first few frames are garbage and cannot be rendered unless the copy starts from a keyframe. Frame-accurate timing in Aegisub does not sync correctly with the Golumpa video stream when played back in MPC-HC. I think the garbage initial frames are throwing off the renderer, but I have not confirmed that. The v2 uses the transcode I worked from, to avoid that issue.
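To make the fade and k-timing conventions above concrete, here is a minimal sketch of what the lines look like inside an .ass file. The times, durations, and lyrics are made up for illustration:

```
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text

; Two contiguous lines: each ends exactly where the next begins, both fade 150ms
Dialogue: 0,0:01:02.00,0:01:05.50,OP_Rom,,0,0,0,,{\fade(150,150)}mae muite
Dialogue: 0,0:01:05.50,0:01:08.20,OP_Rom,,0,0,0,,{\fade(150,150)}aruite yuku

; The same first line with simple \k timing; durations are in centiseconds
Dialogue: 0,0:01:02.00,0:01:05.50,OP_Rom,,0,0,0,,{\k30}ma{\k25}e {\k45}mu{\k20}i{\k40}te
```

Note how the end time of the first Dialogue line equals the start time of the second, so there is no gap to cause "flashing."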
  3. You can either demux the subtitles and change the font in the .ass files manually or in Aegisub, or it is possible to change the fonts dynamically during playback. To demux the subs, use ffmpeg, or MKVToolNix with a demuxing addon like gMKVExtractGUI: ffmpeg -i video.mkv -c copy out.ass For ffmpeg, use the -map switch to specify which subtitle track to extract from multi-sub files. To change the default styling during playback in MPC-HC: right-click the main window -> Subtitle Track -> make sure "Default Style" is checked, and that will override some of the styling information in styled subs. Fonts cannot be changed in burned-in/hardsubbed streams.
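For multi-sub files, the -map selection mentioned above looks like this (file names are placeholders; the stream indices depend on the actual file):

```shell
# List the streams first to find the subtitle track you want
ffmpeg -i video.mkv

# Extract the second subtitle stream (0:s:1 = subtitle stream index 1, zero-based)
ffmpeg -i video.mkv -map 0:s:1 -c copy out.ass
```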
  4. DmonHiro 720p BD Quality

    I really like most of DmonHiro's encodes; just keep in mind that he does no work on the subs. They are just muxed Crunchyroll subs with the font size/position changed. In terms of A/V, they are always of acceptable quality, but they can be quite large sometimes, since he only ever does 10-bit AVC and never filters grain. From reading his commentary over many of his releases, he seems to do a lot of remuxing and sometimes messes up the sources. I think he once muxed in some ITX subs for a show that weren't timed properly, because why not. So not being sure whether he used the BDs or did a second transcode for this show or that show is more sloppy documentation than maliciousness. I would give him the benefit of the doubt. Actually, I did some work on the subs for "Isekai wa Smartphone to Tomo ni" and used DmonHiro's 720p A/V. If you want a 1080p version of it, feel free to request a patch. Just link me the 1080p source.
  5. Fixing/Adjusting Colours

    For clarity, I recommend always differentiating between filtering artifacts and encoding artifacts. Filtering artifacts occur before the stream is sent to the encoding software (x264/x265); encoding artifacts occur as a result of the lossy encoding settings. If you are not sure which kind an artifact is, check how it looks in AvsPmod, or do a lossless encode (libx264 at crf=0 is lossless). If the artifact is present in the lossless encode, it is a filtering artifact. If it is only present in the lossy encode, it is an encoding artifact. Your screenshot comparison looks like it is showing filtering artifacts, not encoding ones, but feel free to encode lossless to double-check. Text on the screen usually does not stand up to sharpening/line-darkening very well. What you are seeing is probably a result of the line darkening and hence is normal. Personally, I would just leave it, since it is a lot of work to get it perfect. However, the best solution is to use MaskTools to manually de-select the text for that scene; the second best is to scene-filter (or not filter) that specific scene, or to use lower or different filter settings. There is also the hack of playing with StackHorizontal, StackVertical, Trim, and concatenation (+) enough to exclude that portion of the video for those frames, although that is probably not a good idea; taking the time to learn MaskTools would be better.

For color correction, it is not an exact science, but more of an art really. autolevels() and GamMac() can point you in the right direction, as can looking at the histograms directly. It can also help to look at alternative scenes from the same source, or at different sources. Really though, it is just about having a reference for what it should look like, even if it is just in your head, and trying a lot of different settings. Tweak() should usually be used in conjunction with ColorYUV() when experimenting.
Sometimes the hue is wrong: Tweak(hue=-10). Or the entire thing is undersaturated, Tweak(sat=1.2,maxSat=20), or too saturated, Tweak(sat=0.93). For luma, I usually do Tweak(bright=4) or Levels(0, 0.95, 255, 0, 255) (the second value is the gamma; 0.95 darkens the mid-tones by roughly 5%). Try not to modify the luma without scene filtering, because humans are very sensitive to it and will readily notice any distortions. Lowering it will always destroy detail, and really bright video just looks bad to me personally. If you cannot find settings you really like for the color and luma, just leave them alone. If the source has lousy color, no one will criticize you for leaving it alone, since you are accurately representing the source. Most encoders, including very experienced ones, do not bother changing chroma, even when it is obviously the wrong hue, undersaturated, or overly bright.
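To see what Levels(0, 0.95, 255, 0, 255) actually does to pixel values, here is a rough approximation of the mapping in Python (my own sketch of the normalise / gamma / rescale steps, not the AviSynth source):

```python
def levels(pixel: float, in_low: float, gamma: float, in_high: float,
           out_low: float, out_high: float) -> float:
    """Approximate AviSynth Levels(): normalise, apply gamma, rescale."""
    x = (pixel - in_low) / (in_high - in_low)
    x = min(max(x, 0.0), 1.0)   # clamp to [0, 1]
    x = x ** (1.0 / gamma)      # gamma < 1.0 darkens the mid-tones
    return out_low + x * (out_high - out_low)

# Levels(0, 0.95, 255, 0, 255): end points unchanged, mid-tones pulled down
print(levels(128, 0, 0.95, 255, 0, 255))
```

Note that black (0) and white (255) map to themselves; only the mid-tones move, which is why a small gamma tweak is less destructive than a flat brightness offset.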
  6. I wonder what function mux is

    With demuxing, the idea is to take an existing file and create a new file that is a subset of the original. In other words, you have at least two files where there was one, at least briefly. Demuxing does not necessarily imply that the original file is modified, so "taking out" is not accurate. The usage is closer to "extracting" than "taking out," since nothing is removed, and it is also not "separating," because the original remains intact. The original is not "separated" but rather partially "copied/duplicated," hence "extracting," in the sense of extracting from an archive: the archive is not modified afterwards.

When some people say "demuxing," however, they mean that in addition to creating a subset of the original file, they also intend to delete the original and rename the subset with the original file name. In other words, they "demux" one or more components "out" of the file with the aim of deleting them. But demuxing does not require deleting, so this second usage, the one that focuses on deleting, is better called "remuxing," since it is technologically infeasible to delete anything via stream targeting. "Demuxing deletions" are actually muxes internally. I.e., "demuxing" an audio track from dual-audio.mkv into single-audio.mkv is not demuxing but remuxing, because you must initialize a muxer and then tell it not to mux some of the contents into the new file. In other words, copying, via a muxer, into a new file: remuxing. So, in practical terms, since "demuxing deletions" require initializing a muxer, the operation is actually remuxing, not demuxing. Demuxing by extraction, in theory, only needs a start and end point to copy from the existing file and does not necessarily need to initialize a muxer at all. Muxing is useful for distribution, and demuxing is useful for obtaining sources to work with.
As in: demux the eng.aac from a Funi release and mux it with the JPN_BD.m2ts streams to create a dual-audio.bdremux.mkv file.
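As a sketch of the distinction in ffmpeg terms (file names assumed): an extraction leaves the original untouched, while a "demuxing deletion" necessarily runs the muxer again.

```shell
# Extraction: copy one audio stream out; dual-audio.mkv is not modified
ffmpeg -i dual-audio.mkv -map 0:a:0 -c copy eng.aac

# "Demuxing deletion": really a remux -- keep everything except the second audio track
ffmpeg -i dual-audio.mkv -map 0 -map -0:a:1 -c copy single-audio.mkv
```

(The .aac output assumes the track really is AAC; ffmpeg will refuse the stream copy otherwise.)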
  7. Change video header

    Writing application/library is container-level info and easy to remove; just refer to the metadata-omission parameters in ffmpeg (-map_metadata -1). In other words, just change/modify the container. For encoder settings, the issue is that the information is part of the video stream itself: the video.avc or video.hevc stream carries that info. In order to strip it, you need a tool that understands the encoded stream and modifies it appropriately, or an encoder that does not insert it to begin with. The laziest way is to transcode the stream and not insert the metadata into the new stream: ffmpeg -i input.mkv -c:v libx265 -x265-params no-info=1 -c:a copy out.h265.mkv The "no-info" flag also works when using x265.exe directly. AVC is a bit more difficult, since most AVC implementations use libx264 and the x264 developers are explicitly hostile to the concept. Technically, it should be possible to write a patch for libx264, compile an ffmpeg.noMetadata.exe (or x264.exe) that does this, and redistribute it. Some are probably floating around the internet somewhere.

Now for the fun part: due to some bugs related to stream handling, certain versions of mkvmerge remove part of the encoding info from .hevc streams. I am not sure whether some versions are bugged the same way with .avc streams. So the concept is: muxing sometimes removes meta info. Other muxing programs (not ffmpeg/MKVToolNix) sometimes did not implement the metadata-copying code correctly, or figured it would be too much hassle, or whatever, so when used, they strip metadata from the streams. There are a lot out there, like MP4Box and tsMuxeR (an eac3to dependency). Use tsMuxeR, or an alternative, to mux the file.mkv into an avc.stream.ts and then mux it back to mkv/mp4. That will strip most of the metadata from AVC streams.
Example:

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High 10@L4.1
Format settings, CABAC : Yes
Format settings, ReFrames : 9 frames
Muxing mode : Header stripping
Codec ID : V_MPEG4/ISO/AVC
Duration : 1mn 30s
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 23.976 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 10 bits
Scan type : Progressive
Writing library : x264 core 125 r2208 d9d2288
Encoding settings : cabac=1 / ref=9 / deblock=1:-2:-2 / analyse=0x3:0x133 / me=umh / subme=9 / psy=1 / psy_rd=0.60:0.00 / mixed_ref=1 / me_range=24 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=3 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=0 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=8 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=23 / scenecut=40 / intra_refresh=0 / rc=crf / mbtree=0 / crf=17.0 / qcomp=0.60 / qpmin=10 / qpmax=38 / qpstep=4 / ip_ratio=1.40 / pb_ratio=1.30 / aq=2:0.60
Language : English
Default : Yes
Forced : No
Color range : Limited
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709

Becomes...
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High 10@L4.1
Format settings, CABAC : Yes
Format settings, ReFrames : 9 frames
Codec ID : V_MPEG4/ISO/AVC
Duration : 1mn 30s
Bit rate mode : Variable
Maximum bit rate : 40.0 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 23.976 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 10 bits
Scan type : Progressive
Language : English
Default : Yes
Forced : No
Color range : Limited
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709

It is also possible to use custom ffmpeg builds with bit-stream filters enabled (-bsf, -bsf:v) to alter playback info like the fps and matrices, and to insert arbitrary string+value pairs.
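For completeness, the container-level strip described at the top of this post is a one-line remux; it removes the "Writing application/library" fields but leaves the in-stream encoding settings intact (file names assumed):

```shell
# Remux with container metadata dropped; the streams themselves are untouched
ffmpeg -i input.mkv -map 0 -c copy -map_metadata -1 stripped.mkv
```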
  8. Ordered Chapters removal

  9. Let me know how to make subtitles on MP4 videos

    I am not sure what you are asking. Are you trying to hardcode subs into the video stream, or did you mean to ask about caption/subtitle manipulation? There are a lot of ways to hardcode subs. The easiest/laziest is Handbrake (recommended); the most versatile are probably ass-render techniques for AviSynth/VapourSynth. For general subtitle manipulation and format conversion, use "Subtitle Edit." For .ass subtitles specifically, Aegisub is more specialized. Many players can also load subtitles dynamically during playback if they are named similarly to the video file. If you do not want to re-encode the video but still want access to the subtitles, you can use MKVToolNix to mux the two together into an mkv file. The mp4 container does not robustly support subtitles, so I would recommend against it if you want to work with internal subtitles at the video stream's native quality. There is also software like Plex/Emby that can dynamically transcode video and embed (hardsub) the subtitles from softsub .mkv files if mkv compatibility is an issue.
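If you go the MKVToolNix route, the soft-mux is a one-liner (file names are placeholders):

```shell
# Mux external .ass subs alongside the existing video/audio, no re-encode
mkvmerge -o muxed.mkv video.mp4 subtitles.ass
```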
  10. While it is true that the connection to the IRC server is potentially secure, transfers over DCC are always unencrypted. DCC is an "extension" to the IRC protocol and uses its own TCP connection. The idea with DCC is that the IRC network is used to agree on a random source port for the client, and the server designates a port to listen on. Those listening ports are not necessarily random, but in practice they tend to be sequential, especially if the ports need to be forwarded to a locally hosted bot. In other words, while the IRC connection may be over TLS, XDCC transfers are cleartext and easily blockable, independently of the IRC network connection, by any ISP that cares.
  11. This right here is impossible, and you've already lost before you've even started. Technically, you could re-encode a .hevc stream to .h264 in lossless mode, and that would make it more compatible without quality loss: since the target is lossless, no further quality is lost. The resulting stream would be quite large, however. From the design perspective, GUIs make sense if what you are trying to do is reasonably straightforward and either 1) there are not many possible states or 2) you agree not to expose all possible states to the user. Encoding is exactly the opposite. There are endless states and encoding settings, some of which are appropriate to certain streams but not others. Which options make sense for a particular stream is something you get by understanding video/encoding and filtering theory. So it makes more sense to choose which states you would like to invoke dynamically, as needed, by looking at the source and drawing on your own repository of knowledge. Command-line interfaces (CLIs) are better for that. Filtering (for AA, debanding, and de-noising) works similarly. That is not to say there is no software that can perform encoding or filtering in a user-friendly way with a limited set of options. Rather, either way you will need to rely on your knowledge and experience of handling video to determine what settings to use, and the limited set of options that GUIs tend to impose is not always a worthwhile trade for mouse-friendliness.

Then... why are you bothering to encode at all? Think about it. If you honestly think video bitrate does not matter, then just do a DVD or BD mux of the original. Done. If the original has issues like banding and whatnot, then filter it and encode losslessly. That is about ~12GB per 24-minute episode using x264, by the way. Video bitrate is not a problem for you, right? Right?
Encoders that transcode, including from BDMVs or raw DVDs, normally try to maintain good quality relative to the bitrate they allocate, but transcoding is, by definition, a lossy process. If the Beatrice-Raws transcode looks visually the same as the BDMV, then the difference in bitrate between the BDMV and the encode is what people refer to as "placebo quality": no perceivable quality retained compared to another stream of much higher bitrate. If you do not agree with the premise of transcoding, which is to remove placebo quality, then you should only accept raw sources (BDMV, raw DVD, WebRips as appropriate) or losslessly encoded video. If you want to learn about encoding, check the signature link of my post and go down to the "Encoding" section. Keep an open mind: be flexible as to whether a tool has a GUI or CLI, and about the various compromises each encoder makes when deciding on all the various settings. One of those hot-button areas, when deciding what to compromise on, is compatibility. Typically, the more settings are tweaked to increase quality, the less compatible the stream. If you want perfect compatibility, use software like https://www.plex.tv/ or https://emby.media/ which, of course, compromises quality. Want it to not compromise quality? Then do lossless mode, which decreases compatibility with your network/computer/TV, etc. Learning starts when you agree you know nothing about a subject. Learning ends (or never begins) when you have made up your mind about it.
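For reference, the filter-then-encode-losslessly route mentioned above boils down to something like this (file names assumed; crf 0 with libx264 is the lossless mode referred to earlier):

```shell
# crf 0 = mathematically lossless H.264; expect very large files (~12GB/episode)
ffmpeg -i filtered.mkv -c:v libx264 -crf 0 -preset veryslow -c:a copy lossless.mkv
```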
  12. Need help regarding audio

    It is just an index of other people's work. The credit goes to the writers of the guides. If you have any more that you think belong on the index, let me know and I can include them.
  13. Need help regarding audio

    Okay. I am more awake now. As an addendum... Personally, I would use DVD Decrypter to merge the raw stream .vobs together from the dvd.iso, based on the DVD's internal chapters, then DGDecode or ffmpeg to demux the audio out. For the final muxing, MKVToolNix can be used. If the track does not sync properly, and as long as the DVDs are not PAL, you should be able to either set an offset in the MKVToolNix GUI until they sync, or cut the stream using "ffmpeg -i demuxed.ac3 -c copy -ss 00:00:03 demuxed.cut.ac3". For PAL DVDs, the audio will sync at first but then desync towards the end of every episode. Those audio streams might need to be lengthened, or stretched out; that can be done in Audacity.
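The two non-PAL fixes can be sketched as follows (offsets and file names are made up; mkvmerge's --sync applies to the track ID within the input file that follows it):

```shell
# Option 1: delay the audio track by 500 ms at mux time
mkvmerge -o synced.mkv video.mkv --sync 0:500 demuxed.ac3

# Option 2: cut the first 3 seconds off the stream, then mux normally
ffmpeg -i demuxed.ac3 -c copy -ss 00:00:03 demuxed.cut.ac3
```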
  14. Need help regarding audio

    So essentially, you are trying to take a file with video + audio streams and add in (or swap in) a different audio stream, where that second audio stream has not yet been demuxed from the raw DVDs. Click on my signature link, go to the Muxing section, and start reading. Good luck.
  15. Check my signature link. Scroll down to the filter section. See you in a month or two.