

------------------------------------------------------------------------------------


Voluntary and involuntary eye movements 



Introduction


  In order to acquire, fixate on, and track visual stimuli, the human eyes move both voluntarily and involuntarily. Most eye movements are involuntary reflexes; the exceptions are saccades, vergence shifts, and smooth pursuit.


 

Voluntary eye movements


- Saccades: Voluntary eye movements occur in small jumps called saccades. Horizontal and vertical saccades use different neuronal circuitry. Horizontal saccades are initiated by neurons in the frontal eye fields of the cerebral cortex: activation of the right frontal eye field causes the eyes to look to the left, and activation of the left frontal eye field causes the eyes to look to the right. Vertical saccades are initiated by diffuse areas of the cortex.


  Eye movements can be classified into the following types:

- voluntary motion

- tracking (both voluntary and involuntary)

- convergence

- pupillary reactions

- control of the lens

 

Most of these are reflexes rather than voluntary movements.

 

 

Voluntary motion

 

  Voluntary eye movements occur in saccades: very fast jumps from one eye position to another, in contrast to the slow, smooth motion of smooth pursuit. Saccades serve fixation, rapid eye movement, and the fast phase of optokinetic nystagmus[1]. Very small jumps, called microsaccades, occur even when the eye appears still.


  Voluntary horizontal and vertical gaze use different neuronal circuitry. Voluntary horizontal gaze is initiated by neurons in the frontal eye field of the cerebral cortex: activation of the right frontal eye field causes the eyes to look to the left, and activation of the left frontal eye field causes the eyes to look to the right. Voluntary vertical gaze follows a different pathway. First of all, there is no single cortical center responsible for vertical gaze; instead, diffuse areas of the cortex project to the rostral interstitial nucleus of the medial longitudinal fasciculus (MLF).

 

Tracking 


  Most of our normal voluntary eye movements are not smooth, but rather occur in saccades. However, we can move our eyes smoothly when tracking a moving object. This smooth pursuit uses part of the vestibulo-ocular reflex (VOR) pathways and requires visual input to the occipital cortex in order to lock the eyes onto the target.


  The fixation reflex and the optokinetic reflex use the same pathway as smooth pursuit. The fixation reflex is the ability to keep the eyes fixed on a moving target; it compensates for the VOR to stabilize the eyes when the head tracks the target. The optokinetic reflex (or optokinetic nystagmus, OKN) is an involuntary fixation on moving objects.


Vestibulo-ocular reflex





 

Footnote

[1] Optokinetic nystagmus:  

Posted by Cat.IanKang

http://www.umiacs.umd.edu/~ramani/cmsc426/

Posted by Cat.IanKang

To solve this problem, I tried so many things :(


I hope this post helps you save time.



- Just downgrade your gdal version. 


In my case, none of the solutions I tried, including switching from 32-bit to 64-bit and changing the Python version, as suggested by anonymous users, was effective.


The problem was solved when I downgraded gdal from version 2.1.0 to 2.0.3.


good luck :)


(* Now, my python version is 3.4.4 and gdal version is 2.0.3.)
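A quick way to confirm which GDAL the interpreter actually picks up is the sketch below. The import name and `VersionInfo` call are from the standard osgeo bindings; the try/except just makes the check safe on machines where GDAL is not installed.

```python
def gdal_version():
    """Return the installed GDAL release string, or None if the bindings are missing."""
    try:
        from osgeo import gdal
    except ImportError:
        return None
    return gdal.VersionInfo("RELEASE_NAME")  # e.g. "2.0.3"

print(gdal_version())
```

If this prints 2.1.0, you are on the version that caused the problem for me.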







Posted by Cat.IanKang

The first step is to set up the environment. 


The following sites provide clear instructions on what to do:



http://cartometric.com/blog/2011/10/17/install-gdal-on-windows/


https://pythongisandstuff.wordpress.com/2011/07/07/installing-gdal-and-ogr-for-python-on-windows/


These sites are helpful.



http://www.gisinternals.com/release.php 

* download site



------------------------------------------------------------

* Make sure the environment is set up properly.



The next step is to install PTVS (Python Tools for Visual Studio), which lets you use Python inside Visual Studio. (In my case, the version is Visual Studio 2013.)


Search for either 'python for visual studio' or 'ptvs'.


You will find websites that can help you :)


Choose the correct version of PTVS and install it.


Create a new project and enter the following code:


"from osgeo import gdal"


If it works, you are ready to use GDAL in Visual Studio.


If it does not, please check the following:


- Does your project use the same Python version that you set up the GDAL environment for?


- Does your PTVS version correspond to your Visual Studio version?


- Is your Python completion DB updated? (If not, you can update it manually by clicking the "completion DB" button.)
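The first two points above can be checked with a small diagnostic run inside the new project. The checks are plain Python, nothing PTVS-specific; the `VersionInfo` call is from the standard osgeo bindings.

```python
import sys

# Which interpreter does this project actually use?
print(sys.version)
print(sys.executable)

# Can that interpreter import the GDAL bindings?
try:
    from osgeo import gdal
    print("gdal OK:", gdal.VersionInfo("RELEASE_NAME"))
except ImportError as exc:
    print("gdal import failed:", exc)
```

If `sys.executable` points at a different Python than the one you installed GDAL into, that mismatch is the problem.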


If you have any problems, just leave a comment :)



------------------------------------------------------------------


I'm using GDAL to convert IMG data into PNG data.


Below is an example:




from osgeo import gdal
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

gdal.UseExceptions()

# Open the two USGS NED 1 m tiles (north and south).
geo_n = gdal.Open(r'D:\ChromeDownload\USGS_NED_one_meter_x24y459_IL_12_County_HenryCO_2009_IMG_2015\USGS_NED_one_meter_x24y459_IL_12_County_HenryCO_2009_IMG_2015.img')
geo_s = gdal.Open(r'D:\ChromeDownload\USGS_NED_one_meter_x24y460_IL_12_County_HenryCO_2009_IMG_2015\USGS_NED_one_meter_x24y460_IL_12_County_HenryCO_2009_IMG_2015.img')

drv = geo_n.GetDriver()
print(drv.GetMetadataItem('DMD_LONGNAME'))

north = geo_n.ReadAsArray()
south = geo_s.ReadAsArray()

# topo = np.vstack((north, south))   # stack both tiles vertically
topo = np.vstack(south)              # use only the southern tile for now

# Crop to the bounding box of valid (positive) elevations.
i, j = np.where(topo > 0)
topo = topo[min(i):max(i) + 1, min(j):max(j) + 1]
topo[topo == 0] = np.nan             # mask remaining no-data cells
print(topo.shape)

fig = plt.figure(frameon=False)
plt.imshow(topo, cmap=cm.BrBG_r)
plt.axis('off')
cbar = plt.colorbar(shrink=0.75)
cbar.set_label('meters')
plt.savefig('kauai.png', dpi=300, bbox_inches='tight')
plt.show()




Posted by Cat.IanKang

http://sci-hub.io/


When you enter a piece of paper information, e.g. a DOI, you can access the paper for free.


It's really awesome '-'


(* I heard it is being sued in the U.S.)

Posted by Cat.IanKang

ref: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476485(v=vs.85).aspx

ref2: https://msdn.microsoft.com/en-us/library/windows/desktop/dn508285(v=vs.85).aspx



In my case, I had forgotten to call the 'Unmap' method before calling the 'Draw' method.


ID3D11DeviceContext::Unmap method: 

Invalidate the pointer to a resource and reenable the GPU's access to that resource.

Posted by Cat.IanKang

The error message above was shown because the render target view (which views the texture being rendered into) and the depth stencil view (the corresponding depth buffer) had different multi-sampling settings.


When using multi-sampling, every pixel needs to store extra data for the sub-samples. Each texture resource is prepared for one particular multi-sampling setting, and to make them work together, both the color texture and the depth buffer (after all, it's just another texture) need the same setting. You can have resources with different settings, but you can only bind them together if their settings coincide.



Example code:



HRESULT hr = S_OK;

// Set up the render target texture description.
D3D11_TEXTURE2D_DESC rtTextureDesc;
rtTextureDesc.Width = 1280;
rtTextureDesc.Height = 960;
rtTextureDesc.MipLevels = 1;
rtTextureDesc.ArraySize = 1;
rtTextureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
rtTextureDesc.SampleDesc.Count = 1;      // must match the depth buffer below
rtTextureDesc.SampleDesc.Quality = 0;    // must match the depth buffer below
rtTextureDesc.Usage = D3D11_USAGE_DEFAULT;
rtTextureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
rtTextureDesc.CPUAccessFlags = 0;
rtTextureDesc.MiscFlags = 0;

// Create the render target texture.
V_RETURN(pd3dDevice->CreateTexture2D(&rtTextureDesc, NULL, &m_renderTargetTexture));

// Set up the description of the render target view.
D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc;
renderTargetViewDesc.Format = rtTextureDesc.Format;
renderTargetViewDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
renderTargetViewDesc.Texture2D.MipSlice = 0;

// Create the render target view.
V_RETURN(pd3dDevice->CreateRenderTargetView(m_renderTargetTexture, &renderTargetViewDesc, &m_renderTargetView));

// Set up the depth stencil texture description.
D3D11_TEXTURE2D_DESC dsTextureDesc;
dsTextureDesc.Width = 1280;
dsTextureDesc.Height = 960;
dsTextureDesc.MipLevels = 1;
dsTextureDesc.ArraySize = 1;
dsTextureDesc.SampleDesc.Count = 1;      // same multi-sampling setting as the render target
dsTextureDesc.SampleDesc.Quality = 0;
dsTextureDesc.Format = DXGI_FORMAT_D32_FLOAT;
dsTextureDesc.Usage = D3D11_USAGE_DEFAULT;
dsTextureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
dsTextureDesc.CPUAccessFlags = 0;
dsTextureDesc.MiscFlags = 0;
V_RETURN(pd3dDevice->CreateTexture2D(&dsTextureDesc, NULL, &m_depthTexture));

// Create the depth stencil view.
D3D11_DEPTH_STENCIL_VIEW_DESC DescDS;
DescDS.Format = dsTextureDesc.Format;
DescDS.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
DescDS.Texture2D.MipSlice = 0;
DescDS.Flags = 0;
V_RETURN(pd3dDevice->CreateDepthStencilView(m_depthTexture, &DescDS, &m_depthView));

// Bind the render target and depth stencil views together;
// this succeeds only because their multi-sampling settings coincide.
pd3dImmediateContext->OMSetRenderTargets(1, &m_renderTargetView, m_depthView);
return hr;

Posted by Cat.IanKang

URL: http://gameprogrammingpatterns.com/flyweight.html#forest-for-the-trees





[Game Programming Patterns] 1.1 Design Patterns Revisited - the Flyweight Pattern


The fog lifts, revealing a majestic old-growth forest. Countless ancient hemlocks soar overhead, like a green cathedral. A stained-glass canopy of leaves fractures the sunlight, scattering gold through the mist. Between the giant trunks, you can make out the massive forest receding into the distance.



This is the kind of surreal setting we dream of as game developers, and scenes like this are usually made possible by a pattern with the humblest of names: Flyweight.




Forest for the trees


Describing a lush forest takes only a few sentences, but implementing it in a real-time game is an entirely different story. When a forest of individual trees fills the screen, the GPU is drawing millions of polygons sixty times per second. There are thousands of trees, each with highly detailed geometry containing thousands of polygons. Rendering this raises not only a memory problem, but also the problem that all of this data has to travel across the bus between the CPU and the GPU.
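The core of the pattern can be sketched in a few lines. This is a minimal illustration in Python, not the book's C++ code, and the class and field names are made up for this sketch: the heavy shared data (mesh, textures) lives in one TreeModel instance, while each Tree stores only its cheap per-instance state.

```python
class TreeModel:
    """Heavy shared state: loaded once, referenced by every tree."""
    def __init__(self):
        self.mesh = "hemlock_mesh"        # stand-ins for real geometry/texture data
        self.bark_texture = "bark.png"
        self.leaves_texture = "leaves.png"

class Tree:
    """Light per-instance state: position and size only."""
    def __init__(self, model, x, y, height):
        self.model = model                # a reference, not a copy
        self.x, self.y, self.height = x, y, height

model = TreeModel()                       # one copy of the heavy data
forest = [Tree(model, i, i * 2, 10.0) for i in range(10000)]

# Every tree shares the same model object, so memory grows only
# with the cheap per-instance fields.
print(all(t.model is model for t in forest))  # True
```

Ten thousand trees, one mesh: this sharing is exactly what lets the GPU-side version of the pattern send the tree geometry across the bus once.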

Posted by Cat.IanKang