3. Adaptive Streaming Internals
In this section we review some of the details of HLS, HDS and MSS. Each of these protocols has strengths and weaknesses, which we discuss in the following sections.
Figure 4. A comparison of HLS and MSS/HDS: The latter can create aggregate formats that can be distributed on a CDN for VoD, whereas for live video, all distribute chunks on the CDN. MSS allows audio and video to be aggregated and delivered separately, but HDS and HLS deliver these together.
Apple HTTP Live Streaming (HLS)
Ironically, unlike Microsoft and Adobe, Apple chose not to use the ISO MPEG file format – a format based on Apple’s MOV file format – in its adaptive streaming technology. Instead, HLS takes an MPEG-2 TS and segments it into a sequence of MPEG-2 TS files that encapsulate both the audio and video. These segments are placed on any HTTP server along with the playlist files. The playlist (or index) manifest file is a text file (based on Winamp’s original m3u file format) with an m3u8 extension. Full details can be found in [HLS].
HLS defines two types of playlist files: normal and variant. The normal playlist file lists URLs that point to chunks that should be played sequentially. The variant playlist file points to a collection of different normal playlist files, one for each output profile. Metadata is carried in the playlist files as comments – lines preceded by ‘#’. In the case of normal playlist files, this metadata includes a sequence number used to associate chunks from different profiles, information about the chunk duration, a directive signaling whether chunks can be cached, the location of decryption keys, the type of stream, and time information. In the case of a variant playlist, the metadata includes the bitrate of the profile, its resolution, its codec, and an ID that can be used to associate different encodings of the same content. Figure 5 and Figure 6 show a sample HLS variant playlist file and normal playlist file.
Figure 5. An HLS variant playlist file showing eight output profiles with different bitrates. The URLs for the m3u8 files are relative, but could include the leading ‘http://…’. In this example, each profile’s playlist is in a separate path component of the URL.
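Since Figure 5 itself is not reproduced here, the following minimal sketch shows roughly what such a variant playlist looks like and how a client might pick a profile from it. The bitrates, resolutions and paths are illustrative assumptions, not values from the original figure, and the parsing is reduced to the attributes discussed above.

```python
import re

# Illustrative variant playlist: one #EXT-X-STREAM-INF line per output profile,
# followed by that profile's (relative) playlist URL. Values are made up.
VARIANT_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=350000,RESOLUTION=320x240
low/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000,RESOLUTION=640x360
mid/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000,RESOLUTION=1280x720
high/prog_index.m3u8
"""

def parse_variant(playlist_text):
    """Return (bandwidth, relative_url) pairs from a variant playlist."""
    profiles = []
    lines = playlist_text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            bandwidth = int(m.group(1)) if m else 0
            profiles.append((bandwidth, lines[i + 1]))  # next line is the profile URL
    return profiles

def pick_profile(profiles, measured_bps):
    """Choose the highest-bitrate profile that fits the measured throughput."""
    candidates = [p for p in profiles if p[0] <= measured_bps]
    return max(candidates or [min(profiles)], key=lambda p: p[0])

if __name__ == "__main__":
    profiles = parse_variant(VARIANT_PLAYLIST)
    print(pick_profile(profiles, measured_bps=1_000_000))  # -> (800000, 'mid/prog_index.m3u8')
```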
Figure 6. An HLS playlist file from a live stream showing the three latest available TS chunks. The #EXT-X-MEDIA-SEQUENCE tag identifies the sequence number of the first chunk, 505.ts; it is used to align chunks from different profiles. Note that the chunk name carries no streaming-specific information. The #EXT-X-TARGETDURATION:11 tag gives the maximum expected chunk duration; the chunks here are nominally 10 seconds long, though durations can vary. The #EXT-X-KEY:METHOD=NONE tag shows that no encryption was used in this sequence. The #EXTINF:10 tags show the duration of each segment. As in the variant playlist file, the URLs are relative to the base URL used to fetch the playlist.
In HLS, a playlist file corresponding to a live stream must be repeatedly downloaded so that the client can learn the URLs of the most recently available chunks. The playlist is downloaded every time a chunk is played, and thus, in order to minimize the number of these requests, Apple recommends a relatively long chunk duration of 10 seconds. However, the playlist file is small compared with any video content, and the client maintains an open TCP connection to the server, so this extra network load is not significant. Shorter chunk durations can therefore be used, which allows the client to adapt bitrates more quickly. VoD playlists are distinguished from live playlists by the #EXT-X-PLAYLIST-TYPE and #EXT-X-ENDLIST tags.
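To make the refresh behaviour concrete, below is a rough sketch of the polling loop a live HLS client might run. It is not Apple’s implementation: the playlist URL is hypothetical, and tag handling is reduced to the few tags discussed above.

```python
import time
import urllib.request

PLAYLIST_URL = "http://example.com/live/prog_index.m3u8"  # hypothetical live playlist

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def live_loop():
    """Re-download the live playlist and track new chunks by media sequence number."""
    last_seq = -1
    while True:
        lines = fetch(PLAYLIST_URL).splitlines()
        target, seq, chunks = 10, 0, []
        for i, line in enumerate(lines):
            if line.startswith("#EXT-X-TARGETDURATION:"):
                target = int(line.split(":")[1])
            elif line.startswith("#EXT-X-MEDIA-SEQUENCE:"):
                seq = int(line.split(":")[1])
            elif line.startswith("#EXTINF:"):
                chunks.append(lines[i + 1])        # the chunk URI follows its #EXTINF tag
        for offset, uri in enumerate(chunks):
            if seq + offset > last_seq:
                print("would fetch", uri)          # a real client downloads and buffers here
                last_seq = seq + offset
        time.sleep(target)                         # re-poll roughly once per target duration

# live_loop() would run indefinitely against a real live playlist URL.
```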
HLS is the only protocol that doesn’t require chunks to start with IDR frames. It can download chunks from two profiles and switch the decoder between profiles on an IDR frame that occurs in the middle of a chunk. However, doing this incurs extra bandwidth consumption, as two chunks corresponding to the same portion of video are downloaded at the same time.
HLS Considerations
Some advantages of HLS are:
It is a simple protocol that is easy to modify. The playlists are easily accessible, and their text format lends itself to simple modification for applications such as re-broadcast or ad insertion.
The use of TS files means that there is a rich ecosystem for testing and verifying file conformance.
TS files can carry other metadata, such as SCTE 35 cues or ID3 tags (see [HLSID3]).
HLS is native to popular iOS devices, the users of which are accustomed to paying for apps and other services. That is, HLS is more easily monetized.
Some disadvantages of HLS are:
HLS is not supported natively on Windows OS platforms.
TS files mux the audio, video and data together. This means that multi-language support either comes at the cost of sending all the languages in every chunk or of creating duplicate chunks for each language. Similarly, data PIDs must either be muxed into every chunk or multiple versions of the chunks must be created with different data PIDs.
There is no standard aggregate format for HLS, which means that many chunk files are created. A day’s worth of one channel with eight profiles at a 10-second chunk duration will consist of almost 70,000 files (see the quick calculation below). Managing such a large collection of files is not convenient.
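The file-count estimate in the last point is easy to reproduce. The small calculation below assumes exactly 10-second chunks and ignores the playlist files themselves.

```python
seconds_per_day = 24 * 60 * 60           # 86,400
chunk_duration = 10                      # seconds, per Apple's recommendation
profiles = 8

chunks_per_profile = seconds_per_day // chunk_duration   # 8,640 chunks per profile
total_chunks = chunks_per_profile * profiles              # 69,120 -> "almost 70,000"
print(total_chunks)
```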
Note: iOS 5 offers features that loosen some of these limitations, but the distribution of iOS versions means that these disadvantages persist in the market today.
Microsoft Silverlight Smooth Streaming (MSS)
Silverlight Smooth Streaming delivers streams as a sequence of ISO MPEG-4 files (see [MSS] and [MP4]). These are typically pushed by an encoder to a Microsoft IIS server (using HTTP POST), which aggregates them for each profile into an ‘ismv’ file for video and an ‘isma’ file for audio. The IIS server also creates an XML manifest file that contains information about the bitrates and resolutions of the available profiles (see Figure 7). When the manifest is requested from a Microsoft IIS server, the request has a specific format:
http://{serverName}/{PublishingPointPath}/{PublishingPointName}.isml/manifest
The PublishingPointPath and PublishingPointName are derived from the IIS configuration.
Unlike HLS, in which the URLs are given explicitly in the playlist, in MSS the manifest files contain information that allows the client to create a RESTful URL request based on timing information in the stream. For live streaming, the client doesn’t need to repeatedly download a manifest – it computes the URLs for the chunks in each profile directly. The segments are extracted from the ismv and isma files and served as ‘fragmented’ ISO MPEG-4 (fMP4) files. MSS (optionally) separates the audio and video into separate chunks and combines them in the player. The URLs below show typical requests for video and audio. The QualityLevel indicates the profile, and the video= and audio-eng= indicate the specific chunk requested. The Fragments portion of the request is given using a time stamp (usually in 100-nanosecond units) that the IIS server uses to extract the correct chunk from the aggregate MP4 audio and/or video files.
http://sourcehost/local/2/mysticSmooth.isml/QualityLevels(350000)/Fragments(video=2489452460333)
http://sourcehost/local/2/mysticSmooth.isml/QualityLevels(31466)/Fragments(audio-eng=2489450962444)
In the VoD case, the manifest files contain timing and sequence information for all the chunks in the content. The player uses this information to create the URL requests for the audio and video chunks.
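As a rough illustration of the RESTful request scheme, the helper below rebuilds the example URLs above from a publishing point base URL, a profile bitrate and a chunk start time in 100-nanosecond units. The function and parameter names are mine, not part of the MSS specification.

```python
def mss_fragment_url(base_url, bitrate, start_time_hns, track="video"):
    """Build an MSS chunk request of the form
    {base}/QualityLevels({bitrate})/Fragments({track}={start_time}),
    where start_time_hns is the chunk start time in 100-ns units taken
    from the manifest (or, for live, from timestamps carried in earlier chunks)."""
    return f"{base_url}/QualityLevels({bitrate})/Fragments({track}={start_time_hns})"

if __name__ == "__main__":
    base = "http://sourcehost/local/2/mysticSmooth.isml"   # from the example above
    print(mss_fragment_url(base, 350000, 2489452460333))
    print(mss_fragment_url(base, 31466, 2489450962444, track="audio-eng"))
```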
Note that the use of IIS as the source of the manifest and fMP4 files doesn’t preclude use of standard HTTP servers in the CDN. The CDN can still cache and deliver the manifest and chunks as it would any other files. More information about MSS can be found at Microsoft (see [SSTO]) and various excellent blogs of the developers of the technology (see [SSBLOG]).
MSS Considerations
Some advantages of MSS are:
IIS creates an aggregate format for the stream, so that a small number of files can hold all the information for the complete smooth stream.
The use of IIS brings useful analysis and logging tools, as well as the ability to deliver both MSS and HLS content directly from the IIS server.
The recommended use of a small chunk size allows for rapid adaptation during HTTP streaming playback.
The segregated video and audio files mean that delivery of different audio tracks involves just a manifest file change.
The aggregate file format supports multiple data tracks that can be used to store metadata about ad insertion, subtitling, etc.
Some disadvantages of MSS are:
The need to place an IIS server in the data flow adds an extra point of failure and complicates the network.
On PCs, MSS requires installation of a separate Silverlight plug-in.
Figure 7. A sample MSS manifest file. The elements with ‘t=”249…”’ specify the time stamps of chunks that the server already has and can deliver. These are converted to Fragment timestamps in the URL requesting an fMP4 chunk. The returned chunk holds time stamps of the next chunk or two (in its UUID box), so that the client can continue to fetch chunks without having to request a new manifest.
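Since the figure is not reproduced, the snippet below parses a hand-written approximation of a Smooth Streaming client manifest and prints the chunk start times just described. The element and attribute values are illustrative; treat the XML as a sketch of the general shape of the format rather than a conformant manifest.

```python
import xml.etree.ElementTree as ET

# Hand-written approximation of a Smooth Streaming client manifest (values are made up).
MANIFEST = """\
<SmoothStreamingMedia MajorVersion="2" MinorVersion="0" TimeScale="10000000" IsLive="TRUE">
  <StreamIndex Type="video" TimeScale="10000000"
               Url="QualityLevels({bitrate})/Fragments(video={start time})">
    <QualityLevel Index="0" Bitrate="350000" FourCC="H264" MaxWidth="320" MaxHeight="240"/>
    <QualityLevel Index="1" Bitrate="1500000" FourCC="H264" MaxWidth="1280" MaxHeight="720"/>
    <c t="2489452460333" d="20000000"/>
    <c t="2489472460333" d="20000000"/>
  </StreamIndex>
</SmoothStreamingMedia>
"""

root = ET.fromstring(MANIFEST)
for chunk in root.iter("c"):
    # The t attribute is the chunk start time (100-ns units) that the client
    # substitutes into the Fragments(...) request described earlier.
    print(chunk.get("t"), chunk.get("d"))
```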
Adobe HTTP Dynamic Streaming (HDS)
Adobe HDS was defined after both HLS and MSS and makes use of elements in each (see [HDS]). In HDS, an XML manifest file (of file type f4m) contains information about the available profiles (see Figure 8 and [F4M]). As in HLS, data that allows the client to derive the URLs of the available chunks is repeatedly downloaded by the client; in HDS this is called the bootstrap information. The bootstrap information is in a binary format and hence isn’t human-readable. As in MSS, segments are encoded as fragmented MP4 files that contain both audio and video information in one file. HDS chunk requests have the form:
http://server_and_path/QualityModifierSeg<segment_number>-Frag<fragment_number>
where the segment and fragment number together define a specific chunk. As in MSS, an aggregate (f4f) file format is used to store all the chunks and extract them when a specific request is made.
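By analogy with the MSS helper earlier, a sketch of the Seg/Frag addressing might look like the following. The quality-modifier name and the segment/fragment numbers are assumed values chosen only to match the pattern above, not Adobe’s exact scheme.

```python
def hds_fragment_url(base_url, quality_modifier, segment, fragment):
    """Build an HDS chunk request of the form
    {base}/{quality_modifier}Seg{segment}-Frag{fragment}, where the segment and
    fragment numbers are derived from the binary bootstrap information."""
    return f"{base_url}/{quality_modifier}Seg{segment}-Frag{fragment}"

if __name__ == "__main__":
    # Hypothetical example: fragment 42 of segment 1 for a 1500 kbps rendition.
    print(hds_fragment_url("http://server_and_path", "mystic1500", 1, 42))
```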
Figure 8. A sample HDS manifest file.
HDS Considerations
Some advantages of HDS are:
The Flash client is available on multiple devices and is installed on almost every PC in the world.
HDS is a part of Flash and can make use of Flash’s environment and readily available developer base.
Some disadvantages of HDS are:
HDS is a relative late-comer and could suffer more from stability issues.
Adobe’s Flash access roadmap is rapidly changing, making it difficult to deploy a stable ecosystem.
Adobe keeps the details of its format close. Combined with the binary format of the bootstrap file, this limits the ecosystem of partners offering compatible solutions.
4. Feature Comparison
In this section, we compare HLS, HDS and MSS usability for a variety of common usage scenarios.
Delivery of Multiple Audio Channels
In HLS, TS files can easily carry multiple audio tracks, but this isn’t necessarily a strength when it comes to adaptive HTTP streaming because it means the TS chunks are bigger, even if only one audio stream is consumed. In places where there are multiple languages, multiple audio streams included in each chunk can consume a larger portion of the bandwidth than the video. For HLS to work well, packagers have to create chunks with each video/audio language combination, increasing the storage needed for holding the chunks.
MSS stores each audio track as a separate file, so it’s easy to create chunks with any video-audio combination. The total number of files is the same as with HLS, but the ability to store the audio and video once in aggregate format makes MSS more efficient.
HDS does not offer a compelling solution to delivering different audio streams.
Encryption and DRM
HLS supports encryption of each TS file. This means that all the data in the TS file is encrypted and there is no way to extract it without access to the decryption keys. All metadata related to the stream (e.g. the location of the decryption keys) must be included in the playlist file. This works well, except that HLS does not specify a mechanism for authenticating clients to receive the decryption keys. This is considered a deployment issue. A number of vendors offer HLS-type encryption, often with their own twist which makes the final deployment incompatible with other implementations.
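For concreteness, the sketch below shows how a key location is typically signalled in the playlist when AES-128 encryption is applied, and extracts it the way a client would. The key URI and IV are hypothetical, and real deployments layer their own key delivery and client authentication on top.

```python
import re

# Illustrative encrypted media playlist; the key URI and IV are hypothetical.
ENCRYPTED_PLAYLIST = """\
#EXTM3U
#EXT-X-TARGETDURATION:11
#EXT-X-MEDIA-SEQUENCE:505
#EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/key_12.bin",IV=0x000000000000000000000000000001F9
#EXTINF:10,
505.ts
#EXTINF:10,
506.ts
"""

def key_info(playlist_text):
    """Return (method, key_uri, iv) from the first #EXT-X-KEY tag, if present."""
    m = re.search(r'#EXT-X-KEY:METHOD=([^,]+),URI="([^"]+)"(?:,IV=(\S+))?', playlist_text)
    return m.groups() if m else None

print(key_info(ENCRYPTED_PLAYLIST))
# A real client would fetch the key from the URI (after whatever authentication the
# deployment adds) and decrypt each TS chunk with AES-128 before demuxing it.
```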
MSS uses Microsoft’s PlayReady, which gives a complete framework for encrypting content, managing keys, and delivering them to clients. In PlayReady, only the payload of the fMP4 file is encrypted, so the chunk can carry other metadata. Microsoft makes PlayReady code available to multiple vendors that productize it, and so a number of vendors offer PlayReady capability (in a relatively undifferentiated way).
HDS uses Adobe’s Flash Access, which has an interesting twist that simplifies interaction between the key management server and the scrambler that does the encryption. Typically, keys must be exchanged between these two components, and this exchange interface is not standardized. Each DRM vendor/scrambler vendor pair must implement this pair-wise proprietary API. With Adobe Access, no key exchange is necessary – the decryption keys are sent along with the content, but are themselves encrypted. Access to those keys is granted at run time, but no interaction between the key management system and scrambler is needed. Adobe licenses its Access code to third parties, or it may be used as part of the Flash Media Server product suite.
Closed Captions / Subtitling
HLS can decode and display closed captions (using ATSC Digital Television Standard Part 4 – MPEG-2 Video System Characteristics - A/53, Part 4:2007, see [ATCA]) included in the TS chunks as of iOS 4.3 (the implementation in iOS 4.2 is more problematic). For DVB teletext, packagers need to convert the subtitle data into ATSC format or wait for clients to support teletext data.
MSS supports data tracks that hold Timed Text Markup Language (TTML), a way to specify a separate data stream with subtitle, timing and placement information (see [TTML]). For MSS, packagers need to extract subtitle information from their input and convert it into a TTML track. Microsoft’s MSS client implementation currently offers support for W3C TTML, but not for SMPTE TTML (see [SMPTE-TT]), which adds support for bitmapped subtitles, commonly used in Europe.
HDS supports data tracks that hold subtitles as DFXP file data, based on the TTML format. Clients can selectively download this data, similar to MSS, but client support requires customization and additional development.
Targeted Ad insertion
HLS is the simplest protocol to use for chunk-substitution-based ad insertion. With HLS, the playlist file can be modified to deliver different ad chunks to different clients (see Figure 9). The EXT-X-DISCONTINUITY tag can be used to tell the decoder to reset (e.g. because subsequent chunks may have different PID values); only the sequence numbering must be managed carefully, so that the IDs line up when returning to the main program. HDS also supports a repeated download of the bootstrap data used to specify chunks, and this can be modified to create references to ad chunks – but because the bootstrap data format is binary, and the URLs are RESTful with references to chunk indexes, the process is complex.
MSS is trickier when it comes to chunk-based ad insertion for live streams. The fact that chunks contain timing information used to request the next chunk means that all ad chunks have to have identical timing to the main content chunks (or that some other method is used to reconcile the program time when returning to the program). Nevertheless, a proxy can be used to redirect RESTful URL chunk requests and serve different chunks to different clients.
Both MSS and HDS can deliver control events in separate data tracks. These can be used to trigger client behaviors using the Silverlight and Flash environments, including ad insertion behavior. This is beyond the scope of this paper, which is focused on ‘in stream’ insertion. One difference between MSS and HDS is that MSS defines a client-side ad insertion architecture, whereas HDS does not.
Figure 9. HLS ad insertion in which changes to the playlist file delivered to each client cause each client to make different URL requests for ads and thus receive targeted ad content.
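A minimal sketch of the playlist rewriting that Figure 9 describes is shown below. The ad chunk URLs and the splice position are placeholders, and a production system would also keep the media sequence numbering and chunk durations consistent, as noted above.

```python
AD_CHUNKS = ["http://ads.example.com/ad123/0.ts",   # hypothetical targeted ad chunks
             "http://ads.example.com/ad123/1.ts"]

def splice_ad(playlist_lines, splice_index, ad_uris, ad_duration=10):
    """Insert ad chunks before the splice_index-th media chunk, wrapping them in
    #EXT-X-DISCONTINUITY tags so the decoder resets around the ad break."""
    out, chunk_no = [], 0
    for line in playlist_lines:
        if line.startswith("#EXTINF"):
            if chunk_no == splice_index:
                out.append("#EXT-X-DISCONTINUITY")
                for uri in ad_uris:
                    out.append(f"#EXTINF:{ad_duration},")
                    out.append(uri)
                out.append("#EXT-X-DISCONTINUITY")
            chunk_no += 1
        out.append(line)
    return out

if __name__ == "__main__":
    main = ["#EXTM3U", "#EXT-X-TARGETDURATION:11",
            "#EXTINF:10,", "505.ts", "#EXTINF:10,", "506.ts"]
    print("\n".join(splice_ad(main, 1, AD_CHUNKS)))   # ad break spliced before 506.ts
```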
Trick Modes (Fast-forward / Rewind)
VoD trick modes, such as fast-forward or rewind, are messy in all the protocols. None of the protocols offer native support for fast-forward or rewind. Some MSS variations have defined zoetrope images that can be embedded in a separate track. These can be used to show still images from the video sequence and allow seeking into a specific location in the video.
HDS supports a fast playback mode, but this doesn’t appear to work well.
Custom VoD Playlists
It is convenient to be able to take content from multiple different assets and stitch them together to form one asset. This is readily done in HLS, where the playlist can contain URLs that reference chunks from different encodings and locations. In MSS and HDS, the RESTful URL name spaces and the references to chunks via time stamp or sequence number make such playlists essentially impossible to construct.
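As a sketch of how simple this is in HLS, the snippet below stitches chunks from two hypothetical assets into one VoD playlist just by listing their absolute URLs in order. A discontinuity tag is added on the assumption that the two encodings do not share parameters.

```python
# Hypothetical chunk URLs from two different VoD assets / encodings.
INTRO = [f"http://cdn-a.example.com/intro/{i}.ts" for i in range(3)]
MOVIE = [f"http://cdn-b.example.com/movie/{i}.ts" for i in range(3)]

def stitched_playlist(parts, chunk_duration=10):
    """Build a single VoD playlist from several lists of absolute chunk URLs."""
    lines = ["#EXTM3U",
             f"#EXT-X-TARGETDURATION:{chunk_duration + 1}",
             "#EXT-X-PLAYLIST-TYPE:VOD"]
    for n, part in enumerate(parts):
        if n > 0:
            lines.append("#EXT-X-DISCONTINUITY")   # the stitched encodings may differ
        for uri in part:
            lines.append(f"#EXTINF:{chunk_duration},")
            lines.append(uri)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

print(stitched_playlist([INTRO, MOVIE]))
```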
Fast Channel Change
Adaptive HTTP streaming can download low bitrate chunks initially, making channel ‘tune in’ times low. The duration of the chunk directly affects how fast the channel adapts to a higher bandwidth (and higher quality video). Because of this, MSS and HDS, which are tuned to work with smaller chunks, tend to work a bit better than HLS.
Failover Due to Upstream Issues
HLS manifests can list failover URLs in case content is not available. The mechanism used in the variant playlist file to specify different profiles can be used to specify failover servers, since the client (starting with iOS 3.1) will attempt to change to the next profile when a chunk request for a profile returns an HTTP 404 ‘file not found’ code. This is a convenient, distributed redundancy mode.
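This redundant-stream behaviour is usually expressed by listing the same profile more than once, pointing at different hosts. The sketch below shows the idea (hosts and bitrates are made up) and groups the entries the way a client might order its failover attempts.

```python
import re
from collections import defaultdict

# Redundant variant playlist: each bitrate appears twice, on different hosts.
# If a chunk request against the first entry returns 404, the client can move
# on to the next entry with the same characteristics.
FAILOVER_VARIANT = """\
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
http://cdn-primary.example.com/mid/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
http://cdn-backup.example.com/mid/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000
http://cdn-primary.example.com/high/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000
http://cdn-backup.example.com/high/prog_index.m3u8
"""

def failover_order(playlist_text):
    """Group profile URLs by bandwidth; later entries act as backups for earlier ones."""
    order = defaultdict(list)
    lines = playlist_text.splitlines()
    for i, line in enumerate(lines):
        m = re.search(r"BANDWIDTH=(\d+)", line)
        if m:
            order[int(m.group(1))].append(lines[i + 1])
    return dict(order)

print(failover_order(FAILOVER_VARIANT))
```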
MSS utilizes a run-time client that is fully programmable. Similarly, HDS works within the Flash run-time environment. That means that some failover capabilities can be built into the client. However, in both cases there isn’t a built-in mechanism in the protocol to support a generic failover capability.
All the protocols will fail over to a different profile if the chunks or playlists in a given profile are not available. This potentially allows any of the protocols to be used in a “striping” scenario in which alternate profiles come from different encoders (as long as the encoders output IDR-aligned streams), so that an encoder failure causes the client to adapt to a different, available profile.
Stream Latency
Adaptive HTTP clients buffer a number of segments. Typically one segment is currently playing, one is cached, and a third is being downloaded – so the end-to-end latency is minimally about three segment durations long. With HLS recommended to run with 10-second chunks (though this isn’t necessary), this latency can easily reach 30 seconds or more.
Of the three protocols, only MSS has implemented a low latency mode in which sub-chunks are delivered to the client as soon as they are available. The client doesn’t need to wait for a whole chunk’s worth of stream to be available at the packager before requesting it, reducing its overall end-to-end latency.
Ability to Send Other Data to the Client (Including Manifest Compression)
HLS and HDS can send metadata to the client in their playlist and manifest files. MSS and HDS allow data tracks which can trigger client events and contain almost any kind of data. HLS allows a separate ID3 data track to be muxed into the TS chunks. This can be used to trigger client-side events.
MSS also allows manifest files to be compressed using gzip (as well as internal run-length-type compression constructs) for faster delivery.
5. Conclusion
A review of how these three formats compare is shown in the table below. In spite of MSS’s better performance, which technology will gain the most market share remains to be seen. Ultimately, it may be DASH that succeeds in the market in the long run, and not any of the technologies discussed here.
In any case, online and mobile viewing of premium video content is rapidly complementing the traditional TV experience, and delivery over the Internet requires new protocols to produce a high quality of experience based on device type and network congestion. Apple HLS, Microsoft Smooth Streaming and Adobe Flash HDS represent adaptive delivery protocols that enable high-quality video consumption experiences over the Internet. Content providers must now equip network delivery infrastructure with products capable of receiving standard video containers, slicing them into segments, and delivering those segments along with encryption where required. As with all new technology, the choice of delivery protocol will be made based on a combination of technical and business factors.
6. References
[HLS] HTTP Live Streaming, R. Pantos, http://tools.ietf.org/html/draft-pantos-http-live-streaming-06
[HLS1] HTTP Live Streaming, http://developer.apple.com/library/ios/#documentation/NetworkingInternet/Conceptual/HTTPLiveStreaming/_index.html
[MP4] International Organization for Standardization (2003). "MPEG-4 Part 14: MP4 file format; ISO/IEC 14496-14:2003"
[SSBLOG] http://blog.johndeutscher.com/, http://blogs.iis.net/samzhang/, http://alexzambelli.com/blog/, http://blogs.iis.net/jboch/
[SSTO] Smooth Streaming Technical Overview, http://learn.iis.net/page.aspx/626/smooth-streaming-technical-overview/
[ATCA] ATSC Digital Television Standard Part 4 – MPEG-2 Video System Characteristics (A/53, Part 4:2007), http://www.atsc.org/cms/standards/a53/a_53-Part-4-2009.pdf
[TTML] Timed Text Markup Language, W3C Recommendation 18 November 2010, http://www.w3.org/TR/ttaf1-dfxp/
[MSS] IIS Smooth Streaming Transport Protocol, http://www.iis.net/community/files/media/smoothspecs/%5BMS-SMTH%5D.pdf
[F4M] Flash Media Manifest File Format Specification, http://osmf.org/dev/osmf/specpdfs/FlashMediaManifestFileFormatSpecification.pdf
[HDS] HTTP Dynamic Streaming on the Adobe Flash Platform, http://www.adobe.com/products/httpdynamicstreaming/pdfs/httpdynamicstreaming_wp_ue.pdf
[AHS] 3GPP TS 26.234: "Transparent end-to-end packet switched streaming service (PSS); Protocols and codecs".
[HAS] OIPF Release 2 Specification - HTTP Adaptive Streaming, http://www.openiptvforum.org/docs/Release2/OIPF-T1-R2-Specification-Volume-2a-HTTP-Adaptive-Streaming-V2_0-2010-09-07.pdf
[HLSID3] Timed Metadata for HTTP Live Streaming, http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/HTTP_Live_Streaming_Metadata_Spec/Introduction/Introduction.html
[DVB-BITMAPS] Digital Video Broadcasting (DVB); Subtitling systems, ETSI EN 300 743 V1.3.1 (2006-11), http://www.etsi.org/deliver/etsi_en/300700_300799/300743/01.03.01_60/en_300743v010301p.pdf