24 posts in 'Cat.Storage/SubCat.Research'

  1. 2016.08.16 Two-alternative forced choice
  2. 2016.06.04 Deep Learning Surveys
  3. 2016.02.16 Nearest Neighbor Search
  4. 2015.10.27 ISO preference curves 1
  5. 2015.10.22 DCT versus DFT/KLT
  6. 2015.09.02 Expressions for Paper
  7. 2015.08.15 Feature vector in image processing
  8. 2015.08.10 Real-time infinite texturing from example - writing reference

Two-alternative forced choice


  The two-alternative forced choice (2AFC) method is a ubiquitous choice for measuring detection or discrimination thresholds. During one trial of a 2AFC experiment, a subject is asked to make a decision about two stimuli with regard to a particular stimulus parameter of interest (also called the primary parameter). For example, a subject is instructed to indicate which of two terrain images shows a more realistic scene. See the following figure.



Figure 1. Two terrain images. The left terrain is a 1/64 down-sampled version of the right terrain.


 Over repeated trials with stimulus pairs whose sample rates differ by varying amounts, a psychometric function can be calculated that reflects the empirical probability of the subject's choice as a function of the stimulus difference. The slope of this sigmoidal function is directly related to the subject's discrimination threshold: the steeper the slope, the smaller the stimulus difference the subject can reliably detect. There are many benefits to using the 2AFC method. It requires subjects to perform only a simple decision task. Unlike scaling methods, it provides a threshold measure in physical units. Also, by restricting a subject's response to a binary decision, it avoids contaminating the measured perceptual thresholds with motor noise, unlike methods of adjustment. Furthermore, it generally provides a large number of data points, allowing a statistically sound analysis and robust fits of the data. Finally, with signal detection theory (SDT), there exists a well-accepted and simple observer-model framework that links the psychometric function to an internal sensory representation of the stimulus parameter of interest. For all these reasons, the 2AFC method has been popular in many areas.
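
To make the fitting step concrete, here is a minimal sketch in Python (NumPy/SciPy) of fitting a logistic psychometric function to pooled 2AFC responses. The stimulus differences, response proportions, and starting values are hypothetical, not data from the experiment above.

import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta):
    # 2AFC logistic psychometric function: the probability of choosing
    # the denser terrain rises from 0.5 (chance) to 1.0 with the
    # stimulus difference x. alpha is the 75%-correct point (the
    # threshold); beta controls the slope.
    return 0.5 + 0.5 / (1.0 + np.exp(-beta * (x - alpha)))

# Hypothetical pooled data: stimulus differences and the empirical
# proportion of trials on which the subject chose the denser terrain.
diffs = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
p_choice = np.array([0.52, 0.55, 0.68, 0.80, 0.91, 0.97, 0.99])

(alpha, beta), _ = curve_fit(psychometric, diffs, p_choice, p0=[2.0, 1.0])
print(f"threshold (75% point) = {alpha:.2f}, slope = {beta:.2f}")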


  It is possible to introduce biases in decision making in the 2AFC task. For example, if one stimulus occurs more frequently than the other, then the frequency of exposure to the stimuli may influence the participant's beliefs about the probability of the occurrence of the alternatives. 

Deep Learning Surveys

https://github.com/terryum/awesome-deep-learning-papers



Nearest Neighbor Search

1. Introduction 


 Nearest neighbor search (NNS), also known as proximity search, similarity search, or closest point search, is an optimization problem of finding the closest (or most similar) points. Closeness (similarity) is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function value. The nearest-neighbor (NN) search problem is formally defined as follows: given a set S of points in a space M and a query point q ∈ M, find the closest point in S to q. Most commonly, M is a metric space and dissimilarity is expressed as a distance metric, which is symmetric and satisfies the triangle inequality. Even more commonly, M is taken to be the d-dimensional vector space where dissimilarity is measured using the Euclidean distance, Manhattan distance, or another distance metric. However, the dissimilarity function can be arbitrary.



2. Methods


  Various solutions to the NNS problem have been proposed. The quality and usefulness of the algorithms are determined by the time complexity of queries as well as the space complexity of any search data structures that must be maintained. The informal observation usually referred to as the curse of dimensionality states that there is no general-purpose solution for NNS in high-dimensional Euclidean space using polynomial preprocessing and polylogarithmic search time.


  1) Linear search


   Computing the distance from the query point to every other point in the database is the simplest solution to the NNS problem. This is sometimes called the naive approach; it has a running time of O(dN), where N is the cardinality of S and d is the dimensionality of M. Since there are no search data structures to maintain, linear search has no space complexity beyond the storage of the database.
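
A minimal sketch of this naive approach in Python with NumPy; the database size and query point are arbitrary illustrative values.

import numpy as np

def linear_nns(S, q):
    # Brute-force nearest neighbor search: O(dN) time, no extra space
    # beyond the database S itself (an N x d array of points).
    dists = np.linalg.norm(S - q, axis=1)  # distance from q to every point
    idx = np.argmin(dists)                 # index of the nearest point
    return idx, dists[idx]

# Hypothetical example: 1000 random points in 3-D.
S = np.random.rand(1000, 3)
q = np.array([0.5, 0.5, 0.5])
idx, d = linear_nns(S, q)
print(f"nearest point index = {idx}, distance = {d:.4f}")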


   2) 

ISO preference curves

How do varying N (the spatial resolution of an image) and k (the number of gray levels in an image) affect images? An early study by Huang [1965] attempted to quantify experimentally the effects on image quality produced by varying N and k simultaneously. The experiment used images similar to those shown in the figure below. The woman's face has relatively little detail; the picture of the cameraman contains an intermediate amount of detail; and the crowd picture contains, by comparison, a large amount of detail.


(a) Image with low detail. (b) Image with an intermediate amount of detail. (c) Image with a large amount of detail.


Sets of these three types of images were generated by varying N and k, and participants were then asked to rank them according to their subjective quality. Results were plotted in the form of so-called iso-preference curves in the Nk-plane (see the graph below).


Representative iso-preference curves for the three types of images in the above figure.


Each point in the Nk-plane represents an image having values of N and k equal to the coordinates of that point. Points lying on an iso-preference curve correspond to images of equal subjective quality. It was found in the course of the experiments that the iso-preference curves tended to shift right and upward, but their shapes in each of the three image categories were similar to those shown in the graph above. This is not unexpected, since a shift up and right simply means larger values of N and k, which implies better picture quality. The key point of interest in the present discussion is that the iso-preference curves tend to become more vertical as the detail in the image increases. This result suggests that for images with a large amount of detail, only a few gray levels may be needed. For example, the iso-preference curve for the crowd image is nearly vertical, indicating that, for a fixed value of N, the perceived quality of this type of image is nearly independent of the number of gray levels used. It is also of interest to note that perceived quality in the other two image categories remained the same over some intervals in which the spatial resolution was increased but the number of gray levels actually decreased. The most likely reason is that a decrease in k tends to increase the apparent contrast of an image, a visual effect that humans often perceive as improved image quality.
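
As a rough illustration of how such test sets can be produced, here is a minimal Python/Pillow sketch that generates a reduced-N, reduced-k version of a grayscale image; the file name and parameter values are hypothetical.

import numpy as np
from PIL import Image

def vary_N_k(img, N, k):
    # Reduce spatial resolution to N x N, then scale back up so the
    # result can be displayed at the original size.
    small = img.resize((N, N), Image.BILINEAR)
    restored = small.resize(img.size, Image.NEAREST)
    # Quantize to k gray levels (map each pixel to its bin center).
    a = np.asarray(restored, dtype=np.float64)
    step = 256.0 / k
    quantized = (np.floor(a / step) + 0.5) * step
    return Image.fromarray(np.clip(quantized, 0, 255).astype(np.uint8))

# Hypothetical usage:
# img = Image.open("cameraman.png").convert("L")
# low_quality = vary_N_k(img, N=64, k=8)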


DCT versus DFT/KLT

The Discrete Cosine Transform (DCT): Theory and Application, p. 16


DCT = Discrete Cosine Transform

DFT = Discrete Fourier Transform

KLT = Karhunen-Loève Transform


 At this point, it is important to mention the superiority of the DCT over other image transforms. More specifically, we compare the DCT with two linear transforms: 1) the Karhunen-Loève Transform (KLT) and 2) the Discrete Fourier Transform (DFT).


  The KLT is a linear transform whose basis functions are derived from the statistical properties of the image data, and it can thus be adaptive. It is optimal in the sense of energy compaction, i.e., it places as much energy as possible in as few coefficients as possible. However, the KLT transformation kernel is generally not separable, so the full matrix multiplication must be performed. In other words, the KLT is data dependent and therefore admits no fast (FFT-like) precomputed transform; deriving the basis for each image sub-block requires unreasonable computational resources. Although some fast KLT algorithms have been suggested, the overall complexity of the KLT remains significantly higher than that of the corresponding DCT and DFT algorithms.
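
To illustrate this data dependence, here is a minimal NumPy sketch that derives a KLT basis from flattened image sub-blocks via an eigendecomposition of their covariance matrix; the random blocks stand in for real image data.

import numpy as np

def klt_basis(blocks):
    # blocks: (M, d) array of M flattened image sub-blocks.
    # The KLT basis vectors are the eigenvectors of the data covariance,
    # ordered by decreasing eigenvalue (decreasing variance captured).
    cov = np.cov(blocks, rowvar=False)      # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]                # columns = KLT basis vectors

# Hypothetical data: 500 random 8x8 blocks flattened to 64-D vectors.
blocks = np.random.rand(500, 64)
B = klt_basis(blocks)
coeffs = (blocks - blocks.mean(axis=0)) @ B  # KLT coefficients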


  In accordance with the readers' background, familiarity with the Discrete Fourier Transform (DFT) has been assumed throughout this document. The DFT transformation kernel is linear, separable, and symmetric. Hence, like the DCT, it has fixed basis images, and fast implementations are possible. It also exhibits good decorrelation and energy-compaction characteristics. However, the DFT is a complex transform and therefore requires that both the magnitude and the phase information of an image be encoded. In addition, studies have shown that the DCT provides better energy compaction than the DFT for most natural images. Furthermore, the implicit periodicity of the DFT gives rise to boundary discontinuities that result in significant high-frequency content. After quantization, the Gibbs phenomenon causes the boundary points to take on erroneous values.
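
The energy-compaction comparison can be checked with a small NumPy/SciPy experiment on a synthetic 1-D signal standing in for one image row; the signal and the number of retained coefficients are illustrative.

import numpy as np
from scipy.fft import dct, fft

n = 64
x = np.cos(np.linspace(0, np.pi, n)) + 0.1 * np.random.randn(n)

def energy_fraction(coeffs, m):
    # Fraction of total energy captured by the m largest-magnitude
    # coefficients: a simple energy-compaction measure.
    e = np.abs(coeffs) ** 2
    return np.sort(e)[::-1][:m].sum() / e.sum()

c_dct = dct(x, norm='ortho')
c_dft = fft(x) / np.sqrt(n)  # unitary scaling so energies are comparable

m = 8
print(f"DCT: {energy_fraction(c_dct, m):.3f} of energy in {m} coefficients")
print(f"DFT: {energy_fraction(c_dft, m):.3f} of energy in {m} coefficients")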

Expressions for Paper

1) While texture synthesis has been well-studied in recent years, real-time techniques remain elusive. 


2) Previous texture synthesis methods spend the majority of their time comparing pixel neighborhoods, and thus several attempts have been made to reduce the number of comparisons required. 


3) Texture appearance ranges from structured regular textures, such as a brick wall, to structured irregular textures, such as a stone wall, to stochastic textures, such as roughcast or grass. 


4) Given a sample texture, the goal is to synthesize a new texture that looks like the input. The synthesized texture is tileable and can be of arbitrary size specified by the user.

Feature vector in image processing

It's simply a list of all the measurements you make on an image. It can be structured however you want. Let's say you measure your image and find mean = 20 and standard deviation = 0.2. Then you could make a feature vector


 Feature_vector = {20, 0.2}


Then you can do whatever you want with that, for example measuring texture quality or similarity with other textures.
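
A minimal sketch in Python/NumPy; the random textures and the choice of Euclidean distance are illustrative assumptions.

import numpy as np

def feature_vector(img):
    # A feature vector is just the list of measurements made on the
    # image; here, its mean and standard deviation.
    return np.array([img.mean(), img.std()])

# Hypothetical comparison of two textures: a smaller distance between
# feature vectors means the textures are more similar.
tex_a = np.random.rand(64, 64)
tex_b = np.random.rand(64, 64)
distance = np.linalg.norm(feature_vector(tex_a) - feature_vector(tex_b))
print(f"feature distance = {distance:.4f}")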

Real-time infinite texturing from example - writing reference

On-the-fly Multi-scale infinite texturing from example


Providing efficient solutions for rendering detailed, realistic environments in real-time applications, like games or flight/driving simulators, has always been a major focus in computer graphics. Details can be rendered efficiently using textures. But despite the improvements in graphics hardware, memory capacity, and data-streaming techniques that have allowed for increased scene complexity in recent years, texturing techniques must still satisfy constraints that are difficult to unify in a single approach. Ideally, they should


1) be as fast as possible to avoid penalizing frame rates,

2) use compact texture maps to limit streaming and data transfers that also penalize frame rates, 

3) be non-periodic to avoid visual repetition artifacts, 

4) produce fine details to avoid undersampling artifacts, 

5) be enhanced with relief when details represent geometry like cracks or bumps to improve the rendering quality by accounting for parallax effects.


