While the H coefficient (which indicates how strongly the items in each scale "scale" together in a Guttman-like sense; this is a simplification) isn't printed on your screen, you can retrieve it if you type Code:. Please use the command -dataex- to show a representative sample of data; it is already installed if you have Stata. Many thanks, Weiwen, for your answer, and apologies for not posting my output in the correct manner. I have used the method you suggest, but exactly the same comes out after "return list", i.e., output with the H coefficients left blank. Any other suggestions?
And here is the output, hopefully in the correct fashion: Code:. Huh, I didn't see that coming! I would take this up with the author of the command. I don't dispute that the scale H coefficients are missing, but the manual seems to be pretty clear that they should be returned in those scalars.

Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from kbps to kbps by the proposed content-aware selection of the scalability type were found visually superior to those scaled using a single scalability option over the whole sequence.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Scalable video coding has gained renewed interest since it has been shown [1, 2] that it can achieve compression efficiency close to that of H. Scalable video coders employ flexible temporal predictions (motion-compensated temporal filtering), such as hierarchical B pictures in block-based scalable coders and open-loop MCTF in wavelet coders, to provide temporal scalability, followed by a spatial transform (wavelet or block transform), as shown in Figure 1.
Spatial scalability can be provided by compression of a low-resolution layer with prediction among layers in block-based coders, whereas the wavelet transform inherently provides spatial scalability in wavelet coders. All transform coefficients can then be encoded using an embedded entropy coder to obtain SNR scalability. Alternatively, SNR scalability can be achieved by requantization.
The scalable video compression standard, SVC, is based on block-based scalable coding methods. However, the problem analyzed in this paper is common to all scalable video coding methods, and the proposed solution is applicable to any scalable video coder, including SVC. A survey of recent developments in scalable video coding can be found in [1], and further details on the scalable video coding standardization can be found in .
Rate reduction by discarding enhancement layers of different scalability types generally results in different types of visual distortion in the decoded video, depending on the rate and content . Hence, in many cases, the scalability type should be adapted to the content type of different temporal segments of the video for the best visual results.
There are only a limited number of works that investigate the dependency between scalability type, video content, and rate, and that present objective methods for scalability-type selection. A metric for the optimal ratio of spatial and temporal information has been defined in  and compared with a threshold to select between the spatial and temporal operators. The optimal tradeoff between SNR and temporal scalability is addressed in  using content-based features, where a machine learning algorithm is employed to match content features with the preferred scaling option.
A similar approach is followed in , where content-based features are used to select one of the MPEG-4 FGS modes based on an objective distortion metric defined in . In , optimal rate adaptation is studied by varying spatial resolution, frame rate, and quantization step size using integer programming. In , optimum frame rate and quantization parameter selection to minimize the mean square error (MSE) is presented with rate-distortion modeling and frame skip.
In , the MSE distortion is used for rate-distortion modeling of multidimensional transcoding. It is well known that visual distortions cannot always be measured meaningfully in terms of MSE . An example confirming this observation is shown in Figure 2, where discarding SNR enhancement layer(s) results in a lower MSE (higher PSNR value), but is visually inferior to discarding spatial enhancement layer(s) at the same base-layer bitrate. Hence, although MSE may be a good measure of distortions caused by SNR scaling, visual distortions due to spatial and temporal scalings (distortions related to spatial- and temporal-frequency sensitivity) cannot be measured accurately with the MSE.
Objective measures can be grouped as (i) those based on a model of low-level visual processing in the retina and (ii) those which quantify compression artifacts . An early example of the latter type is , where visual distortion for MPEG-2 coded videos is measured considering blockiness and a perceptual model. In , subjective evaluation of videos coded with several coders, including scalable coders, is investigated, and significant correlation is found with distortion-based objective metrics.
We review examples of latter-type metrics in Section 2. In this work, we study the relationship between scalability type, content type, and bitrate, based on the assumption that the best scalability option depends on both the content and the rate. We define an objective function based on specific visual distortion measures, whose weights are tuned to different shot content types at a given bitrate in order to choose the best scalability type for each temporal segment. The weights of the objective function vary according to the shot content type, since the dominant distortion may depend on the content.
There is a significant amount of work reported on automatic video analysis , which is beyond the scope of this paper. Recently, specific content analysis methods have been developed for sports video . Most of these methods can be implemented in real time or near real time. Content-aware video coding and streaming techniques have been proposed in , where different shots have been assigned different coding parameters depending on the content and user preferences.
Previous works, such as , are experimental: they can determine the optimal operator but not the cost associated with choosing another operator. Hence, they cannot be used in an optimization framework such as rate-distortion optimization or rate-distortion-complexity adaptation. Other works consider only limited scalability options; for example,  considers only SNR and temporal scaling, but not spatial scaling. A block diagram of the proposed system is shown in Figure 3, where a fully embedded scalable video coder is employed. Bitstreams formed according to different combinations of scalability options are then extracted and decoded.
Low-resolution videos are interpolated to the original resolution. Finally, the objective cost function above is evaluated for each combination, and the option that results in the minimum cost is selected. The paper is organized as follows. We discuss distortion measures in Section 2. Section 3 presents the choice of scaling options (SNR, temporal, spatial, and their combinations) and the problem formulation.
Two subjective tests and statistical analyses of the results are described in Section 4. Conclusions are presented in Section 5. It is well known that different scalability options yield different types of distortions . For example, at low rates, SNR scalability results in blockiness and flatness, due to block motion compensation (see Figure 4) and a high quantization parameter (Figure 2(a)).
On the other hand, spatial scalability results in blurriness due to spatial lowpass filtering in 2D wavelet coding (Figure 2(b)), and temporal scalability results in motion jerkiness. Because the PSNR is inadequate to capture all these distortions or to distinguish between them , we need to employ visual quality measures .
It is not the objective of this research to develop new video quality metrics or to verify them. We only employ available metrics to develop a measure for scalability-type selection; the general framework is applicable with any choice of distortion functions as long as training is performed with the same set of functions. The following recently published measures, with small modifications due to the features of the codec, have been used in this work, although the proposed framework does not rely on any specific measure.
Blurriness is defined in terms of the change in edge width . Major vertical and horizontal edges are found using the Canny operator , and the width of these edges is computed. The blurriness metric is then given by. Edges in the still regions of frames are taken into consideration, as done in . A new objective measure for flatness is based on the local variance of relatively smooth regions (regions where there are no significant edges). First, major edges are found using the Canny edge operator , and the local variance of 4 × 4 blocks that contain no significant edges is computed.
The flatness measure is then defined as.
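The exact formula for the flatness measure is not reproduced above. A minimal sketch of the idea, assuming flatness is aggregated from the inverse local variances of the smooth 4 × 4 blocks (the aggregation form and the precomputed edge mask standing in for the Canny step are assumptions, not the paper's definition), might look like:

```python
def block_variance(img, x, y, size=4):
    # sample variance of pixel values in a size x size block at (x, y)
    vals = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def flatness(img, edge_mask, size=4, eps=1e-6):
    # Average inverse variance over blocks containing no significant edges;
    # higher values indicate flatter (over-quantized) regions.  This
    # aggregation is an illustrative assumption.
    h, w = len(img), len(img[0])
    scores = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            if any(edge_mask[j][i] for j in range(y, y + size)
                   for i in range(x, x + size)):
                continue  # skip blocks containing significant (Canny) edges
            scores.append(1.0 / (block_variance(img, x, y, size) + eps))
    return sum(scores) / len(scores) if scores else 0.0
```

A perfectly flat region then scores far higher than a textured one, matching the intuition that quantization-induced flatness shows up as abnormally low local variance.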
MSP output does not show scalability coefficient
The blockiness of the ith horizontal edge can be defined as. The blockiness measure for vertical straight edges, BMvert, can be defined similarly. Finally, the total blockiness metric Dblock is defined as. The hard-limiting operation provides spatial masking of quantization noise in high-texture areas. Several blockiness measures exist to assist PSNR in the evaluation of compression artifacts under the assumption that the block boundaries are known a priori [15, 16, 26].
For example, the blockiness metric proposed in  is defined as the sum of the differences along predefined edges, scaled by the texture near that area. To this effect, the locations of the blockiness artifacts must first be found. Horizontal and vertical edges detected in the decoded frame that do not exist in the original frame are treated as blockiness artifacts. The Canny edge operator  is used to find such edges. Any edge pixels that do not form vertical or horizontal lines are eliminated.
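The exact scaling in the cited metric is not reproduced above. As a simplified sketch of the idea for horizontal edges (difference across a detected blocking edge, attenuated by a local-texture term; the texture term here is a hedged stand-in, not the cited definition):

```python
def row_texture(img, r):
    # mean absolute horizontal gradient on row r (texture near the edge)
    w = len(img[r])
    return sum(abs(img[r][x + 1] - img[r][x]) for x in range(w - 1)) / (w - 1)

def blockiness_horizontal(decoded, edge_rows):
    # Sum of cross-edge differences along rows flagged as blocking
    # artifacts, divided by nearby texture (spatial masking).  The exact
    # scaling in the cited metric is not reproduced; this is illustrative.
    h, w = len(decoded), len(decoded[0])
    total = 0.0
    for r in edge_rows:  # rows detected as blockiness artifacts
        if r + 1 >= h:
            continue
        step = sum(abs(decoded[r][x] - decoded[r + 1][x]) for x in range(w))
        masking = 1.0 + row_texture(decoded, r) + row_texture(decoded, r + 1)
        total += step / masking  # strong texture visually masks the edge
    return total
```

Dividing by the texture term captures the masking effect: the same intensity step contributes less when the surrounding area is highly textured.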
Alternatively, block locations can be determined after decoding the bitstream. A measure of texture near the edge location is included to account for spatial masking. In order to evaluate the difference between the temporal jerkiness of the decoded and original videos with full frame rate, we compute the sum of the magnitudes of the differences of motion vectors over all 16 × 16 blocks at each frame, without considering the replicated (interpolated) frames. Specifically, we perform motion estimation on the original video and denote the motion vectors as MVorg(i) for block i.
We then calculate the MVs on the distorted video (temporally sampled frames, if temporal scaling is used) and estimate the MV for the frame of interest. The distortion between the original and decoded videos then depends on the choice of the interpolation filter. For spatial interpolation, we use the 7-tap synthesis filter, which is reported to be the best interpolating filter for signals downsampled using the 9-tap Daubechies wavelet .
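The jerkiness computation described above (sum of magnitudes of motion-vector differences over all 16 × 16 blocks, with replicated frames excluded) can be sketched as follows; the per-frame list-of-(dx, dy) data layout is an assumption for illustration:

```python
import math

def jerkiness(mv_orig, mv_dec):
    # Sum over frames and 16x16 blocks of |MVorg(i) - MVdec(i)|.
    # Replicated (interpolated) frames are assumed to be excluded by the
    # caller; each frame is a list of (dx, dy) motion vectors per block.
    total = 0.0
    for frame_org, frame_dec in zip(mv_orig, mv_dec):
        for (ox, oy), (dx, dy) in zip(frame_org, frame_dec):
            total += math.hypot(ox - dx, oy - dy)
    return total
```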
We verified that this inverse wavelet filter performed, on the average, 0.
Temporal interpolation should ideally be performed by motion-compensated (MC) filters . However, when the low frame rate video suffers from compression artifacts such as flatness and blockiness, MC filtering is not successful. On the other hand, simple temporal filtering without MC results in ghost artifacts. Hence, we employ a zero-order hold (frame replication) for temporal interpolation, which results in temporal jerkiness distortion. In this section, we first present a list of scalability options for each video segment, assuming that the input video is parsed (divided into temporal segments) and each segment is classified into one of K classes according to content type using a content analysis algorithm.
Shot boundary determination and shot-type classification, which are beyond the scope of this paper, can be done automatically for certain content domains using existing techniques, for example, for soccer videos . Next, we formulate the problem of selecting the best scalability option for each temporal video segment according to its content type among the list of available scalability options, such that the optimal option yields minimum total distortion, which is quantified as a function of the individual distortion measures presented in Section 2.
Finally, the training procedure for determining the coefficients of the linear combination that quantifies the total distortion, as a function of the content type of the video segment, is presented. There are three basic scalability options: temporal, spatial, and SNR scalability. Temporal scalability can be achieved by skipping high-frequency frames and their motion vectors following MCTF. Jerkiness may be observed at the low frame rate. Spatial scaling introduces blur and ringing due to interpolation back to the original size for display.
We observe that spatially scaled videos have a lower PSNR, after interpolation back to the original size, than their visual quality suggests (see Figure 2). SNR scalability is provided by the embedded entropy coding of subbands after the temporal and spatial decompositions. We also consider combinations of scalability types to allow for hybrid scalability modes.
In this work, we allow six combinations of scaling operators, shown in Table 1, that constitute a reasonable subset of scalability options for the target bitrates ( kbps), where the original resolution has been CIF at fps. Most existing methods for adaptation of the video coding rate are based on adaptation of the SNR quantization parameter only, because (i) it is not straightforward to employ the conventional rate-distortion framework for adaptation of temporal, spatial, and SNR resolutions simultaneously, which requires multidimensional optimization; and (ii) PSNR is not an adequate measure of visual quality across different scalability types.
Considering the above limitations, we propose a quantitative method to select the best scalability option for each temporal segment by minimizing a visual distortion measure (cost function). In , a distortion metric that is a linear combination of distinct distortion metrics, such as edgeness and temporal decorrelation, has been proposed. Following a similar approach, we define an objective function of the form.
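The objective function itself is not reproduced above. Assuming it is a weighted linear combination of the individual distortion measures from Section 2 (consistent with the "linear combination" description; the option names and distortion ordering below are illustrative), the per-segment selection can be sketched as:

```python
def total_cost(weights, distortions):
    # J = sum_k w_k * D_k : weighted combination of the distortion
    # measures (e.g. blur, flatness, blockiness, jerkiness).
    return sum(w * d for w, d in zip(weights, distortions))

def best_scalability_option(weights_for_class, options):
    # options: mapping from a scalability option name to its measured
    # distortion tuple for this segment (names are illustrative).
    # Returns the option minimizing the weighted cost.
    return min(options,
               key=lambda name: total_cost(weights_for_class, options[name]))
```

For a content class whose viewers are most sensitive to jerkiness, the weight on the jerkiness term dominates, steering the selection away from temporal scaling for that class.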
A procedure for determination of the coefficients of the cost function according to content type is presented in the following section. The weights depend on the content type because different distortions appear to be dominant for different content types. This procedure is summarized in Table 2.
The basic idea is to select the coefficients such that the objective measure (7) agrees with the results of Subjective Test-I as closely as possible.

Due to the nonlinear transformations produced by the Gaussian kernel, cross-correntropy has no guarantee of zero mean, even when the input data are centered at zero. The definition of centered cross-correntropy overcomes this limitation [12]. We can notice that the centering term in Equation (10) is numerically equal to the estimator of the cross-information potential in Equation (5).
Thus, from Equation (6), the centered cross-correntropy reduces to zero if X and Y are independent random variables. In applications involving signals with unknown amplitudes, some type of normalization can be required. To avoid this process, Xu et al. proposed the correntropy coefficient. It can be verified that its value reduces to zero if the two random variables X and Y are statistically independent, and its absolute value is close to one as they become statistically related.
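The estimators referenced above (Equations 5, 8, 10, and 12) are not reproduced in this excerpt. Based on the standard definitions in the correntropy literature [12], they have the following form (a hedged reconstruction, not a quotation of the paper's equations):

```latex
% Gaussian kernel with kernel size \sigma
\kappa_\sigma(x - y) = \exp\!\left(-\frac{(x - y)^2}{2\sigma^2}\right)

% Centered cross-correntropy estimator: the cross-correntropy minus the
% cross-information potential (the centering term)
\hat{u}(X, Y) = \frac{1}{n}\sum_{i=1}^{n} \kappa_\sigma(x_i - y_i)
             - \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \kappa_\sigma(x_i - y_j)

% Correntropy coefficient: centered cross-correntropy normalized by the
% centered autocorrentropies
\eta = \frac{\hat{u}(X, Y)}{\sqrt{\hat{u}(X, X)\,\hat{u}(Y, Y)}}
```

With this normalization, identical signals give η = 1, and the centering term drives η to zero for independent inputs, matching the properties stated above.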
In this paper, we propose a parallel algorithm for calculating the correntropy coefficient. We analyze its parallel efficiency and scalability toward very large input dimensions. For each image size, we constructed two random images and measured the similarity using the proposed parallel correntropy coefficient.
We choose random images over other types of images because we are not actually interested in the similarity measures themselves, but rather in the computational performance of these measures. In parallel computing, speedup is defined as the ratio between the sequential execution time T_s and the parallel execution time T_p, given by S = T_s / T_p. A linear (ideal) speedup is obtained when, for example, doubling the number of processors also doubles the processing speed [23]. Thus, the goal for parallel algorithms is to achieve linear speedup.
However, according to Amdahl's law [24], the parallel speedup is limited by the sequential fraction of the execution time. The sequential fraction F_s of an algorithm executing on a given architecture with p processors can be estimated from the measured speedup S by the Karp-Flatt metric, F_s = (1/S - 1/p) / (1 - 1/p). Parallel efficiency is a value, typically between zero and one, that expresses the percentage of the speedup achieved by the algorithm compared to the linear speedup.
It represents the percentage of processing power that is actually being used to perform the calculation.
The complement of the efficiency indicates how much effort is wasted due to parallelization overhead. Efficiency E is expressed by E = S / p. When analyzing the scalability of a parallel algorithm, it is important to observe the values of the sequential fraction and the parallel efficiency as the number of processors and the problem size scale. Generally, we would expect that the sequential fraction does not scale together with these numbers. As for the parallel efficiency, since its value is inversely proportional to p, we commonly observe a decreasing value with an increase in the number of processing elements.
For this reason, the scalability of a parallel algorithm is generally associated, especially for larger problems, with the scaling of the problem size [25]. In this work, we study the speedup, efficiency, and scalability of a parallel algorithm for the calculation of correntropy. In addition, a study of the sequential fraction of the algorithm is performed in order to provide insight into how much time is spent on communication, synchronization, and load balancing.
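The three quantities used in this analysis can be computed directly from the measured timings. A minimal sketch, assuming speedup S = T_s / T_p, efficiency E = S / p, and the Karp-Flatt estimator for the experimentally determined sequential fraction:

```python
def speedup(t_serial, t_parallel):
    # S = T_s / T_p
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # E = S / p : fraction of the linear speedup actually achieved
    return speedup(t_serial, t_parallel) / p

def sequential_fraction(t_serial, t_parallel, p):
    # Karp-Flatt metric: F_s = (1/S - 1/p) / (1 - 1/p)
    s = speedup(t_serial, t_parallel)
    return (1.0 / s - 1.0 / p) / (1.0 - 1.0 / p)
```

A perfectly parallel run (T_p = T_s / p) yields S = p, E = 1, and F_s = 0; any overhead for communication, synchronization, or load imbalance shows up as a positive F_s.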
The correntropy coefficient was implemented in parallel using the OpenMP framework [26]. In order to facilitate the parallelization, we reorganized the sequence of operations for the calculation of the correntropy coefficient. Substituting the definitions of the cross-correntropy (Equation 8) and the cross-information potential (Equation 5) into the correntropy coefficient (Equation 12), the following expression is obtained.
Thus, the correntropy coefficient can be obtained by.
Figure 1 illustrates the pseudo-code of the parallelized calculation of this metric. The first step of the algorithm is responsible for loading the characteristics of the images and is not considered for execution time measurements. In the second step, the data are divided among the threads for computation.
Note that only two loops are used to compute all the summations. In the last step of the algorithm, the main thread is responsible for calculating the final value of the correntropy coefficient. (Figure 1: Pseudo-code of the parallel calculation of the correntropy coefficient implemented in OpenMP.) The serial algorithm for the calculation of the correntropy coefficient has an arithmetic complexity of O(n²). In the proposed method, each one of the p processors runs the same program on distinct data, reducing the arithmetic complexity of the correntropy coefficient to O(n²/p). Another important performance measure is the space complexity, defined as the memory required by the algorithm.
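The pseudo-code of Figure 1 is not reproduced here; a serial Python reference for the two-loop computation can still illustrate what each thread accumulates (in the OpenMP version, the outer loop is split across threads and the main thread performs the final combination; the Gaussian kernel size and the paired-sample layout are assumptions):

```python
import math

def correntropy_coefficient(x, y, sigma=1.0):
    # Two loops compute all pairwise kernel evaluations (the information
    # potentials); the final combination corresponds to the step performed
    # by the main thread in the OpenMP version.
    n = len(x)
    kappa = lambda d: math.exp(-(d * d) / (2.0 * sigma * sigma))
    v_xy = sum(kappa(x[i] - y[i]) for i in range(n)) / n  # cross-correntropy
    cip_xy = cip_xx = cip_yy = 0.0
    for i in range(n):      # outer loop: split across threads in OpenMP
        for j in range(n):  # inner loop: all pairwise kernel terms
            cip_xy += kappa(x[i] - y[j])
            cip_xx += kappa(x[i] - x[j])
            cip_yy += kappa(y[i] - y[j])
    n2 = float(n * n)
    u_xy = v_xy - cip_xy / n2  # centered cross-correntropy
    u_xx = 1.0 - cip_xx / n2   # kappa(0) = 1 on the diagonal terms
    u_yy = 1.0 - cip_yy / n2
    return u_xy / math.sqrt(u_xx * u_yy)
```

Since the O(n²) work is entirely in the two loops, splitting the outer loop across p threads (with a sum reduction on the accumulators) yields the O(n²/p) per-processor cost discussed above.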
The experiments were performed on a system with two AMD Opteron processors, each with 12 cores running at 2. Each value of execution time used in the analysis was obtained as an arithmetic average over five measurements for each image size. Generally, every parallel algorithm has a fraction of code that must be executed sequentially. The complement of this fraction can be executed concurrently.
The increase in processing speed obtained from parallelizing the algorithm is limited by this sequential fraction. The results are shown in Figure 2 (analysis of the parallel and serial fractions of the proposed parallel algorithm). From the results in Figure 2, we observe that the sequential fraction of the algorithm is always much smaller than the parallel fraction.
It increases from two to four processors, but decreases asymptotically for higher numbers of processors. The decrease of the sequential fraction of the proposed algorithm with a growing number of processors indicates a potential for parallelization. To evaluate the speedup of the proposed parallelization of the correntropy coefficient for each image set, we conducted simulations for a varying number of processing elements.
The obtained results are presented in Figure 3.