Commit 8b59dffc authored by Apoorv Goel, committed by Alexander Alekhin

Merge pull request #2161 from UnderscoreAsterisk:dynafu

DynamicFusion Implementation

* Add new nodes from points

* Addition of new node in the field

* Warp nodes demo

* Add newline before {

* Remove 3rd party flann dependency

* Regularisation Hierarchy

* Correct node radius

* Change default growth rate

* New node position = centroid of the neighbourhood

* Enlarge nodes while paused

* Dynafu TSDF

* TSDF demo

* Avoid double calc and adjust initial tsdf weight

* Fix bug that caused some voxels to disappear

* getNodePos helper

* Remove USE_INTRINSIC check

* Correct RT avg calculation and remove redundant knn calc

* Slight perf improvements

* Use LinearIndex

* Debug calculations

* set all nodes' RT

* Various bug fixes

* Separate camera and warpfield

* Add dynafu documentation

* Adhere to coding style

* Add dynafu unit test

* update demo

* Merge pull request #2133 from savuor:kinfu_truncate

KinectFusion: several updates (#2133)

* truncateThreshold added

* demo updated for RealSense

* first Kinect2, then RealSense

* more distance-based parameters

* Remove trailing whitespaces

* Replace bool vector with array

* create findNeighbours in WarpField

* Maintain nodesPos matrix in WarpField

* Fix warnings on Windows build

* Remove cameraPose from WarpField

* Use AutoBuffer

* Marching Cubes

* Fix MC

* Split mesh vertices & edges

* Change Mat types in MC

* OpenGL rendering

* Check for HAVE_OPENGL

* Error handling in case HAVE_OPENGL is not defined

* Replace Mat_ with std::vector inside marchCubes

* Parallelise marching cubes

* Fix warpfield and estimate depth inside DynaFuImpl::updateT()

* Linearise depth and use points/normals from MC

* Don't test dynafu without OpenGL support

* Analytical calculation of Jacobian matrices

* Add details about rotation and differentiate graph terms in J_r

* Use derivative of T^-1 from the tutorial

* Remove L2 norm from reg term

* Use chain rule to differentiate data term

* Markdown

* Fix markdown

* Replace MD file by HTML

* Change the data term expression

* Calculate J_d using vector identities

* Rasterize vertex and normals

* Apply warpfield before rendering

* Add CV_UNUSED for normImage to fix warning

* Render float image instead of uint8

* Implement ICP data term and other changes:
1. add option to applyWarp to normals
2. add option to `fetchPointNormals` to return points in voxel coordinates
3. Fix: Use voxel coordinates to update WarpField

* Fix non-OpenGL build

* Initialise newly discovered node transforms with DQB

* Fix data term

* Change data term normal and add kinfu-like distance/angle checks

* Implement regularisation

* Fix warnings

* Credit authors of DQB and table for MC

* cast size_t to int to suppress warning

* Correct regularisation and add normal equation set up

* Delete html

* Dynafu unit test
parent 6a0e9fdd
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# DynaFu ICP Math\n",
"## Differentiating and Linearising Rt matrices\n",
"\n",
"In dynafu, the warp function looks like the following for each node $i$:\n",
"\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"f_i(x_i, V_g) = T_{x_i} * V_g = R(x_i) * V_g + t(x_i)\n",
"\\end{equation*}\n",
"$\n",
"\n",
"where ${x_i}$ are the transformation parameters for node $i$ and the rotation is performed around the corresponding node (and not a global reference)\n",
"\n",
"For linearising a transform around the parameters $\\mathbf{x}$, we need to find the derivative\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"\\displaystyle\n",
"\\frac{\\partial f_i(\\mathbf{x} \\circ \\epsilon, V_g)}{\\partial \\epsilon} |_{\\epsilon = 0}\n",
"\\end{equation*}\n",
"$\n",
"\n",
"We calculate this as follows:\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"f_i(\\mathbf{x} \\circ \\epsilon, V_g) = f_i(\\epsilon, V) = T_{inc} * V\n",
"\\end{equation*}\n",
"$ where $V = f_i(\\mathbf{x}, V_g)$ and $T_{inc}$ is the infinitesimal transform with parameters $\\epsilon$\n",
"\n",
"According to Lie algebra, each Rt matrix can be represented as $A = e^\\xi$ where $\\xi$ are the transform parameters. Therefore,\n",
"\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"f_i(\\mathbf{x}, V_g) = e^\\xi V\n",
"\\end{equation*}\n",
"$\n",
"\n",
"Therefore,\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"\\displaystyle\n",
"\\frac{\\partial f_i(\\mathbf{x} \\circ \\xi, V_g)}{\\partial \\xi} |_{\\xi = 0} =\n",
"\\frac{\\partial e^\\xi V}{\\partial \\xi} |_{\\xi=0} = \n",
"\\begin{pmatrix} -[V]_{\\times} & I_{3x3} \\end{pmatrix}_{3 \\times 6}\n",
"\\end{equation*}\n",
"$\n",
"\n",
"Let us denote $\\begin{pmatrix} -[V]_{\\times} & I_{3x3} \\end{pmatrix}$ as $G(V)$ from now on.\n",
"\n",
"This result is mentioned in [this manifold optimisation tutorial](http://ingmec.ual.es/~jlblanco/papers/jlblanco2010geometry3D_techrep.pdf) (equation 10.23).\n",
"\n",
"With this result, we can now linearise our transformation around $\\mathbf{x}$:\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"f_i(x_i, V_g) = G(V) * \\epsilon + V\n",
"\\end{equation*}\n",
"$\n",
"\n",
"\n",
"I suppose the following is an equivalent excerpt from the dynafu paper (Section about efficient optimisation) that mentions this way of calculating derivatives:\n",
"> We formulate compositional updates $\\hat x$ through the exponential map with a per-node twist $ξ_i ∈ se(3)$, requiring 6 variables per node transform, and perform linearisation around $ξ_i= 0$. \n",
"\n",
"As a side note, the derivative $\\large \\frac{\\partial e^\\xi}{\\partial \\xi}|_{\\xi=0}$ is called the tangent (esentially the derivative) to the SE(3) manifold (the space in which Rt matrix $T_{inc}$ exists) at identity ($\\xi = 0$)"
]
},
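{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick numerical sanity check of this result (an illustrative addition, not part of the original derivation): build the $4\\times4$ matrix of a small twist, push a point through `scipy.linalg.expm` and compare the finite-difference derivative at $\\xi = 0$ with $G(V) = \\begin{pmatrix} -[V]_{\\times} & I_{3x3} \\end{pmatrix}$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from scipy.linalg import expm\n",
"\n",
"def skew(p):\n",
"    # 3x3 cross-product matrix [p]_x\n",
"    return np.array([[0, -p[2], p[1]],\n",
"                     [p[2], 0, -p[0]],\n",
"                     [-p[1], p[0], 0]])\n",
"\n",
"def hat(xi):\n",
"    # 4x4 se(3) element for a twist xi = (rotation | translation)\n",
"    H = np.zeros((4, 4))\n",
"    H[:3, :3] = skew(xi[:3])\n",
"    H[:3, 3] = xi[3:]\n",
"    return H\n",
"\n",
"rng = np.random.default_rng(0)\n",
"V = rng.standard_normal(3)\n",
"G = np.hstack([-skew(V), np.eye(3)])          # analytic tangent (-[V]_x | I)\n",
"\n",
"# finite-difference derivative of exp(xi^) * V w.r.t. xi at xi = 0\n",
"eps = 1e-6\n",
"J_num = np.array([((expm(hat(eps * e)) @ np.r_[V, 1.0])[:3] - V) / eps\n",
"                  for e in np.eye(6)]).T\n",
"print(np.allclose(J_num, G, atol=1e-5))        # expected: True"
]
},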
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimating Warp Field Parameters\n",
"The total energy to be minimised is \n",
"\n",
"$\n",
"E = E_{data} + \\lambda E_{reg}\n",
"$\n",
"\n",
"#### Data term rearrangement \n",
"$\n",
"\\displaystyle\n",
"E_{data} = \\sum_{u \\in \\Omega} \\rho_{Tukey}( (T_u N_g)^T ((T_u V_g) - V_c))\n",
"$\n",
"\n",
"The quadcopter paper tells us that the following expression has the same minimiser, so we can use this instead:\n",
"\n",
"$\n",
"\\displaystyle\n",
"E_{data} = \\sum_{u \\in \\Omega} w_{Tukey}(r_u) \\cdot (r_u)^2\n",
"$\n",
"\n",
"where $w_{Tukey}(x) = \\rho'(x)/x$ which behaves like a constant term and $r_u = N_g^T (V_g - T_u^{-1}\\cdot V_c)$\n",
"\n",
"#### Regularisation term rearrangement\n",
"$\n",
"\\begin{equation}\n",
"\\displaystyle\n",
"E_{reg} = \\sum_{i = 0}^n \\sum_{j \\in \\varepsilon(i)} \\alpha_{ij} \\rho_{Huber} (T_{i}V_g^j - T_{j}V_g^j)\n",
"\\end{equation}\n",
"$\n",
"\n",
"This needs to be changed to the form of weighted least squares to be useful. So incorporate the same rearrangement as the data term and sum over edges instead:\n",
"\n",
"$\n",
"\\begin{equation}\n",
"\\displaystyle\n",
"E_{reg} = \\sum_{e \\in E} w_{Huber}(r_e) (r_e)^2\n",
"\\end{equation}\n",
"$\n",
"\n",
"Here $E$ is the set of the directed edges in the regularisation graph between all nodes from current level and the next coarser level. And $w_{Huber}(x) = \\alpha_x \\rho'(x)/x$\n",
"\n",
"#### Obtaining normal equation\n",
"\n",
"Therefore to solve an iteration, we equate the derivative with 0\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"\\large\n",
"\\frac{\\partial E_{data}}{\\partial \\xi} + \\lambda \\frac{\\partial E_{reg}}{\\partial \\xi} = 0\n",
"\\end{equation*}\n",
"$\n",
"\n",
"which gives us\n",
"\n",
"$\n",
"\\begin{equation*}\n",
"J_d^T W_d(r_d + J_d\\mathbf{\\hat x}) + \\lambda J_r^T W_r (r_r + J_r\\mathbf{\\hat x}) = 0\n",
"\\end{equation*}\n",
"$\n",
"\n",
"$\n",
"(J_d^T W_d J_d + \\lambda J_r^T W_r J_r)\\mathbf{\\hat x} = -(J_d^T W_d r_d + \\lambda J_r^T W_r r_r)\n",
"$\n",
"\n",
"Here $W_d$ and $W_r$ are the weight matrices as described in quadcopter paper. However for $W_r, \\alpha$ is also incorporated in this matrix"
]
},
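{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch (with toy dimensions, not the OpenCV implementation) of how one iteration could be solved once the Jacobians and residuals are available: compute the robust weights $w(r) = \\rho'(r)/r$ for the Tukey and Huber estimators and solve the weighted normal equation for $\\mathbf{\\hat x}$. The tuning constants `c` and `delta`, the dense matrices and the diagonal form of $W_d$, $W_r$ are assumptions made for illustration; the edge weights $\\alpha_{ij}$ are set to 1 here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def tukey_weight(r, c=4.685):\n",
"    # w(r) = rho'(r)/r for the Tukey biweight estimator\n",
"    w = (1.0 - (r / c) ** 2) ** 2\n",
"    return np.where(np.abs(r) <= c, w, 0.0)\n",
"\n",
"def huber_weight(r, delta=1.345):\n",
"    # w(r) = rho'(r)/r for the Huber estimator\n",
"    return np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))\n",
"\n",
"def solve_iteration(J_d, r_d, J_r, r_r, lam):\n",
"    # (J_d^T W_d J_d + lam J_r^T W_r J_r) x = -(J_d^T W_d r_d + lam J_r^T W_r r_r)\n",
"    W_d = np.diag(tukey_weight(r_d))\n",
"    W_r = np.diag(huber_weight(r_r))\n",
"    A = J_d.T @ W_d @ J_d + lam * J_r.T @ W_r @ J_r\n",
"    b = -(J_d.T @ W_d @ r_d + lam * J_r.T @ W_r @ r_r)\n",
"    return np.linalg.solve(A, b)\n",
"\n",
"# toy problem: 100 pixels, 20 graph edges, 3 nodes -> 18 parameters\n",
"rng = np.random.default_rng(0)\n",
"x = solve_iteration(rng.standard_normal((100, 18)), rng.standard_normal(100),\n",
"                    rng.standard_normal((20, 18)), rng.standard_normal(20), lam=1.0)\n",
"print(x.shape)                                 # expected: (18,)"
]
},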
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculating Data Term Jacobian ($J_d$) \n",
"\n",
"We estimate the inverse of Rt matrices instead of the Rt matrices themselves. So firstly we have to write $T_u^{-1} V_g$ in terms of inverse matrices. However, I realised that\n",
"\n",
"$$\n",
"\\begin{equation*}\n",
"T_u^{-1} V_g \\ne \\sum_{k \\in N(V_u)} \\frac{w_k T_k^{-1}V_g}{w_k}\n",
"\\end{equation*}\n",
"$$\n",
"\n",
"Unfortunately, I could not find a representation of $T_u^{-1} V_g$ in terms of $T_k^{-1}V_g$ and got stuck here. Below is an approach without estimating the inverse Rt matrices, I think we can use that instead as the math is now fixed.\n",
"\n",
"### Alternative calculation for $J_d$\n",
"The residual $r_u$ in the data term is \n",
"\n",
"$$\n",
"r_u = (T_u N_g)^T (T_u V_g - V_c)\n",
"$$\n",
"\n",
"Let $a, b$ be column vectors such that $a = T_u N_g$ and $b = (T_u V_g - V_c)$. Now we can state the residual as \n",
"\n",
"$$\n",
"r_u = a^Tb\n",
"$$\n",
"\n",
"Each entry in $J_d$ for node paramter $x_j$ associated with node $j$ is:\n",
"\n",
"$$\n",
"(J_d)_{uj} = \\frac{\\partial r_u}{\\partial x_j} = \\frac{\\partial (a^Tb)}{\\partial x_j}\n",
"$$\n",
"\n",
"**Please note that numerator layout is assumed in all the derivatives**\n",
"\n",
"Applying chain rule for multiple variables, we get\n",
"\n",
"$$\n",
"\\begin{equation}\\begin{aligned}\n",
"\\frac{\\partial (a^Tb)}{\\partial x_j} & = \n",
"\\frac{\\partial (a^Tb)}{\\partial a} \\cdot \\frac{\\partial a}{\\partial x_j} +\n",
"\\frac{\\partial (a^Tb)}{\\partial b} \\cdot \\frac{\\partial b}{\\partial x_j} \\\\\n",
"& =\n",
"\\frac{\\partial (a^Tb)}{\\partial a} \\cdot \\frac{\\partial a}{\\partial x_j} +\n",
"\\frac{\\partial (b^Ta)}{\\partial b} \\cdot \\frac{\\partial b}{\\partial x_j} && \\text{Since $a^Tb = b^Ta$} \\\\\n",
"& =\n",
"b^T \\cdot \\frac{\\partial a}{\\partial x_j} +\n",
"a^T \\cdot \\frac{\\partial b}{\\partial x_j} && \\text{Since $\\frac{\\partial x^TA}{\\partial x} = A^T$}\n",
"\\end{aligned}\\end{equation}\\tag{1}\\label{1}\n",
"$$\n",
"\n",
"The identity $\\frac{\\partial x^TA}{\\partial x} = A^T$ is mentioned in [this wikipedia page](https://en.wikipedia.org/wiki/Matrix_calculus#Vector-by-vector_identities). Now we calculate $\\frac{\\partial a}{\\partial x_j}$ and $\\frac{\\partial b}{\\partial x_j}$ as follows:\n",
"\n",
"$$\n",
"\\begin{equation}\\begin{aligned}\n",
"\\frac{\\partial a}{\\partial x_j} & = \\frac{\\partial (T_u N_g)}{\\partial x_j} \\\\\n",
"& = \\sum_{k \\in N(V_u)} \\frac{w_k \\frac{\\partial T_k N_g}{\\partial x_j}}{w_k} \\\\\n",
"& = \n",
"\\begin{cases}\n",
" \\frac{w_j \\frac{\\partial T_j N_g}{\\partial x_j}}{\\sum_{k \\in N(V_u)} w_k} && \\text{if $j \\in N(V_u)$} \\\\\n",
" 0 && \\text{otherwise}\n",
"\\end{cases} \\\\\n",
"& = \n",
"\\begin{cases}\n",
" \\frac{w_j \\begin{pmatrix}-[T_j N_g]_\\times & I_{3\\times3}\\end{pmatrix}}{\\sum_{k \\in N(V_u)} w_k} && \\text{if $j \\in N(V_u)$} \\\\\n",
" 0 && \\text{otherwise}\n",
"\\end{cases}\n",
"\\end{aligned}\\end{equation}\\tag{2}\\label{2}\n",
"$$\n",
"\n",
"\n",
"$$\n",
"\\begin{equation}\\begin{aligned}\n",
"\\frac{\\partial b}{\\partial x_j} & = \\frac{\\partial (T_uV_g - V_c)}{\\partial x_j} \\\\\n",
"& = \\frac{\\partial T_uV_g}{\\partial x_j}\\\\\n",
"& = \n",
"\\begin{cases}\n",
" \\frac{w_j \\begin{pmatrix}-[T_j V_g]_\\times & I_{3\\times3}\\end{pmatrix}}{\\sum_{k \\in N(V_u)} w_k} && \\text{if $j \\in N(V_u)$} \\\\\n",
" 0 && \\text{otherwise}\n",
"\\end{cases} && \\text{Calculated similarly to ($\\ref{2}$)}\n",
"\\end{aligned}\\end{equation}\\tag{3}\\label{3}\n",
"$$\n",
"\n",
"Substituting equations $(\\ref{2})$, $(\\ref{3})$ as well as the values of $a^T$ and $b^T$ in $(\\ref{1})$, we obtain the required result:\n",
"\n",
"$$\n",
"(J_d)_{uj} = \n",
"\\begin{cases}\n",
"\\frac{w_j}{\\sum_{k \\in N(V_u)} w_k}\n",
"\\left(\n",
" (T_u V_g - V_c)^T\n",
" \\begin{pmatrix}-[T_j N_g] & I_{3\\times3}\\end{pmatrix} +\n",
" (T_u N_g)^T \\begin{pmatrix}-[T_j V_g] & I_{3\\times3}\\end{pmatrix}\n",
"\\right) & \\text{if} j \\in N(V_u) \\\\\n",
"0 & \\text{otherwise}\n",
"\\end{cases}\n",
"$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can simplify this expression further:\n",
"\n",
"$$\n",
"\\begin{equation*}\n",
"\\left(\n",
" (T_u V_g - V_c)^T \\begin{pmatrix}-[T_j N_g] & I_{3\\times3}\\end{pmatrix} +\n",
" (T_u N_g)^T \\begin{pmatrix}-[T_j V_g] & I_{3\\times3}\\end{pmatrix}\n",
"\\right) = \\\\\n",
"\\left [ \n",
"(T_u V_g - V_c)^T \\left ( -[T_j N_g]_\\otimes \\right ) \\mid (T_u V_g - V_c)^T\n",
" \\right ]\n",
"+\n",
"\\left [ \n",
"(T_u N_g)^T \\left ( -[T_j V_g]_\\otimes \\right ) \\mid (T_u N_g)^T\n",
" \\right ] = \\\\\n",
"\\left [ \n",
"(T_u V_g - V_c)^T \\left ( -[T_j N_g]_\\otimes \\right ) +\n",
"(T_u N_g)^T \\left ( -[T_j V_g]_\\otimes \\right )\n",
" \\mid (T_u V_g + T_u N_g - V_c )^T\n",
"\\right ] = \\\\\n",
"\\left [ \n",
"(T_u V_g - V_c)^T ( [T_j N_g]_\\otimes )^T +\n",
"(T_u N_g)^T ( [T_j V_g]_\\otimes )^T\n",
" \\mid (T_u V_g + T_u N_g - V_c )^T\n",
"\\right ] = \\\\\n",
"\\begin{bmatrix}\n",
"( [T_j N_g]_\\otimes )(T_u V_g - V_c) +\n",
"( [T_j V_g]_\\otimes )(T_u N_g)\n",
"\\\\ \n",
"T_u V_g + T_u N_g - V_c\n",
"\\end{bmatrix}^T = \\\\\n",
"\\begin{bmatrix}\n",
"( T_j N_g) \\times (T_u V_g - V_c) +\n",
"( T_j V_g) \\times (T_u N_g)\n",
"\\\\ \n",
"T_u V_g + T_u N_g - V_c\n",
"\\end{bmatrix}^T = \\\\\n",
"\\begin{bmatrix}\n",
"( T_j N_g) \\times (T_u V_g - V_c) +\n",
"( T_j V_g) \\times (T_u N_g)\n",
"\\\\ \n",
"T_u V_g + T_u N_g - V_c \n",
"\\end{bmatrix}^T\n",
"\\end{equation*}\n",
"$$\n",
"\n",
"So, we get the final expression as:\n",
"$$\n",
"\\begin{equation*}\n",
"(J_d)_{uj} = \n",
"\\begin{cases}\n",
"\\frac{w_j}{\\sum_{k \\in N(V_u)} w_k}\n",
"\\begin{bmatrix}\n",
"( T_j N_g) \\times (T_u V_g - V_c) +\n",
"( T_j V_g) \\times (T_u N_g)\n",
"\\\\ \n",
"T_u V_g + T_u N_g - V_c \n",
"\\end{bmatrix}^T\n",
"&\n",
"\\text{if} j \\in N(V_u) \\\\\n",
"0 & \\text{otherwise}\n",
"\\end{cases}\n",
"\\end{equation*}\n",
"$$"
]
},
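{
"cell_type": "markdown",
"metadata": {},
"source": [
"A numerical sanity check of this final expression (an illustration, not the pipeline code): for a pixel influenced by a few nodes, the analytic row $(J_d)_{uj}$ is compared against a finite-difference derivative of $r_u$, perturbing only node $j$ by $e^\\xi$ composed on the left and treating $N_g$ as a point under the full transform, exactly as in the derivation. The `warp` helper below is a toy stand-in for applying $T_u$, i.e. the normalised weighted sum over the neighbouring node transforms."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from scipy.linalg import expm\n",
"\n",
"skew = lambda p: np.array([[0, -p[2], p[1]], [p[2], 0, -p[0]], [-p[1], p[0], 0]])\n",
"\n",
"def exp_se3(xi):\n",
"    # 4x4 rigid transform exp(xi^) for a twist xi = (rotation | translation)\n",
"    H = np.zeros((4, 4))\n",
"    H[:3, :3] = skew(xi[:3])\n",
"    H[:3, 3] = xi[3:]\n",
"    return expm(H)\n",
"\n",
"def warp(p, Ts, w):\n",
"    # T_u p as the normalised weighted sum over the neighbouring node transforms\n",
"    return sum(wk * (Tk @ np.r_[p, 1.0])[:3] for wk, Tk in zip(w, Ts)) / np.sum(w)\n",
"\n",
"rng = np.random.default_rng(1)\n",
"Ts = [exp_se3(rng.standard_normal(6)) for _ in range(3)]   # current node transforms T_k\n",
"w = rng.random(3) + 0.1                                    # node weights w_k\n",
"N_g, V_g, V_c = rng.standard_normal((3, 3))\n",
"\n",
"j = 1                                                      # node to differentiate against\n",
"TuN, TuV = warp(N_g, Ts, w), warp(V_g, Ts, w)\n",
"TjN, TjV = (Ts[j] @ np.r_[N_g, 1.0])[:3], (Ts[j] @ np.r_[V_g, 1.0])[:3]\n",
"\n",
"# analytic entry (J_d)_{uj} from the expression above\n",
"J_row = w[j] / np.sum(w) * np.r_[np.cross(TjN, TuV - V_c) + np.cross(TjV, TuN),\n",
"                                 TuV + TuN - V_c]\n",
"\n",
"# finite differences: perturb only node j by exp(eps * e) composed on the left\n",
"def residual(xi_j):\n",
"    Ts_p = list(Ts)\n",
"    Ts_p[j] = exp_se3(xi_j) @ Ts[j]\n",
"    return warp(N_g, Ts_p, w) @ (warp(V_g, Ts_p, w) - V_c)\n",
"\n",
"eps = 1e-6\n",
"J_num = np.array([(residual(eps * e) - residual(np.zeros(6))) / eps for e in np.eye(6)])\n",
"print(np.allclose(J_num, J_row, atol=1e-4))    # expected: True"
]
},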
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have expression for the Jacobian, we can form the normal equation corresponding to the data term of the ICP\n",
"\n",
"$$\n",
"\\begin{equation*}\n",
"\\\\\n",
"\\Omega \\text{ pixels, N nodes}\n",
"\\\\\n",
"J_d^T W_d J_d \\mathbf{\\hat x} = -J_d^T W_d r_d \\\\\n",
"\\underbrace{J_d^T}_{6N \\times\\Omega} \\underbrace {W_d}_{\\Omega \\times \\Omega}\n",
"\\underbrace{J_d}_{\\Omega \\times 6N} \\underbrace{\\mathbf{\\hat x}}_{6N \\times 1} =\n",
"-J_d^T W_d \\underbrace{r_d}_{\\Omega \\times 1} \\\\\n",
"\\underbrace{\\mathbf{A}_d}_{6N \\times 6N} \\mathbf{\\hat x} = \\underbrace{\\mathbf{b}_d}_{6N \\times 1} \\\\\n",
"\\\\\n",
"\\text {for each block (i, j) which are 6}\\times\\text{6:}\n",
"\\\\\n",
"(\\mathbf{A}_d)_{ij} = \\sum_{\\Omega} \\underbrace{w_d(\\Omega)}_{\\text{robust for pix}}\n",
"\\left(\\frac{\\partial E_d}{\\partial x_i}\\right)^T_\\Omega \\left(\\frac{\\partial E_d}{\\partial x_j}\\right)_\\Omega \\\\\n",
"\\\\\n",
"\\text {for each block j which are 6}\\times\\text{1:}\n",
"\\\\\n",
"(\\mathbf{b}_d)_{j} = - \\sum_{\\Omega} \\underbrace{w_d(\\Omega)}_{\\text{robust for pix}} r_d(\\Omega)\n",
"\\left(\\frac{\\partial E_d}{\\partial x_j}\\right)^T_\\Omega \\\\\n",
"(\\mathbf{b}_d)_{j} = - \\sum_{\\Omega} \\underbrace{w_d(\\Omega)}_{\\text{robust for pix}} ((T_u N_g)^T (T_u V_g - V_c))_{\\Omega}\n",
"\\left(\n",
" \\frac{w_j \\text{ or 0} }{\\sum_{k \\in N(V_u)} w_k}\n",
"\\begin{bmatrix}\n",
"( T_j N_g) \\times (T_u V_g - V_c) +\n",
"( T_j V_g) \\times (T_u N_g)\n",
"\\\\ \n",
"T_u V_g + T_u N_g - V_c \n",
"\\end{bmatrix}\n",
" \\right)_\\Omega \\\\\n",
"(\\mathbf{A}_d)_{ij} = \\sum_{\\Omega} \\underbrace{w_d(\\Omega)}_{\\text{robust for pix}}\n",
"\\left(\n",
"\\frac{(w_i\\text{ or 0})(w_j\\text{ or 0})}{(\\sum_{k \\in N(V_u)} w_k)^2}\n",
" \\underbrace{(A^T_{i} A_{j})}_{6 \\times 6}\n",
"\\right)_{\\Omega}\n",
"\\\\ \\\\\n",
"A_{i} =\n",
"\\begin{bmatrix}\n",
"( T_i N_g) \\times (T_u V_g - V_c) +\n",
"( T_i V_g) \\times (T_u N_g)\n",
"\\\\ \n",
"T_u V_g + T_u N_g - V_c \n",
"\\end{bmatrix}\n",
"\\end{equation*}\n",
"$$"
]
},
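{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch (with made-up per-pixel data, not the actual DynaFu code) of the block-wise assembly above: for every pixel we take its neighbouring node indices, the node weights $w_k$, the per-node $1\\times6$ rows $A_k$, the robust pixel weight and the residual, and scatter $6\\times6$ blocks into $\\mathbf{A}_d$ and $6\\times1$ blocks into $\\mathbf{b}_d$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def assemble_normal_equation(pixels, n_nodes):\n",
"    # pixels: list of (neighbours, weights, rows, w_pix, r) tuples, where\n",
"    #   neighbours: node indices k in N(V_u), weights: w_k,\n",
"    #   rows: per-node 1x6 rows A_k, w_pix: robust Tukey weight, r: residual r_d\n",
"    A = np.zeros((6 * n_nodes, 6 * n_nodes))\n",
"    b = np.zeros(6 * n_nodes)\n",
"    for neighbours, weights, rows, w_pix, r in pixels:\n",
"        norm = np.sum(weights)\n",
"        for i, wi, Ai in zip(neighbours, weights, rows):\n",
"            b[6*i:6*i+6] += -w_pix * r * (wi / norm) * Ai\n",
"            for j, wj, Aj in zip(neighbours, weights, rows):\n",
"                A[6*i:6*i+6, 6*j:6*j+6] += w_pix * (wi * wj / norm**2) * np.outer(Ai, Aj)\n",
"    return A, b\n",
"\n",
"# toy example: 2 nodes, one pixel influenced by both\n",
"rng = np.random.default_rng(0)\n",
"pix = [([0, 1], np.array([0.7, 0.3]), rng.standard_normal((2, 6)), 1.0, 0.05)]\n",
"A, b = assemble_normal_equation(pix, n_nodes=2)\n",
"print(A.shape, b.shape)                        # expected: (12, 12) (12,)"
]
},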
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculating Regularisation Term Jacobian ($J_r$)\n",
"\n",
"Each row in $J_r$ corresponds to derivative of summand for each edge $e$ in the regularisation graph $\\epsilon$ and column $k$ corresponds to node $k$ with respect to which the derivative is calculated.\n",
"\n",
"$$\n",
"\\begin{equation*}\n",
"\\displaystyle\n",
"(J_r)_{ek} = \n",
"\\sum_{e_{ij} \\in \\epsilon} \\frac{\\partial ( T_iV_g^j - T_jV_g^j)}{\\partial x_k}\n",
"=\n",
"\\sum_{e_{ij} \\in \\epsilon} \\begin{cases}\n",
"\\begin{pmatrix} -[T_iV_g^j] & I_{3x3} \\end{pmatrix} & \\text {if } i = k \\\\\n",
"-\\begin{pmatrix} -[T_jV_g^j] & I_{3x3} \\end{pmatrix} & \\text {if } j = k \\\\\n",
"0 & \\text {otherwise}\n",
"\\end{cases}\n",
"\\end{equation*}\n",
"$$\n",
"\n",
"Using this Jacobian we can set up a normal equation corresponding to the regularisation term similarly to the data term as mentioned in the previous section"
]
}
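,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch (toy regularisation graph, not the actual hierarchy code) of how $J_r$ could be filled from the expression above: each directed edge $e_{ij}$ contributes the block $\\begin{pmatrix} -[T_iV_g^j] & I_{3x3} \\end{pmatrix}$ in the columns of node $i$ and its negative (built from $T_jV_g^j$) in the columns of node $j$. Since the edge residual is a 3-vector, each edge occupies three rows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"skew = lambda p: np.array([[0, -p[2], p[1]], [p[2], 0, -p[0]], [-p[1], p[0], 0]])\n",
"\n",
"def build_J_r(edges, node_T, node_pos):\n",
"    # edges: list of (i, j) node index pairs (current level -> coarser level)\n",
"    # node_T: per-node 4x4 transforms, node_pos: node positions V_g^j\n",
"    J = np.zeros((3 * len(edges), 6 * len(node_T)))\n",
"    for e, (i, j) in enumerate(edges):\n",
"        Vj = np.r_[node_pos[j], 1.0]\n",
"        TiV = (node_T[i] @ Vj)[:3]\n",
"        TjV = (node_T[j] @ Vj)[:3]\n",
"        J[3*e:3*e+3, 6*i:6*i+6] = np.hstack([-skew(TiV), np.eye(3)])\n",
"        J[3*e:3*e+3, 6*j:6*j+6] = -np.hstack([-skew(TjV), np.eye(3)])\n",
"    return J\n",
"\n",
"# toy graph: 3 nodes, 2 edges\n",
"rng = np.random.default_rng(0)\n",
"T = [np.eye(4) for _ in range(3)]\n",
"pos = rng.standard_normal((3, 3))\n",
"print(build_J_r([(0, 1), (2, 1)], T, pos).shape)   # expected: (6, 18)"
]
}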
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -16,3 +16,10 @@ volume = {},
chapter = {},
isbn = {978-1-4503-0716-1},
}
@inproceedings{dynamicfusion,
author = {Newcombe, Richard A. and Fox, Dieter and Seitz, Steven M.},
title = {DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}
@@ -12,6 +12,7 @@
#include "opencv2/rgbd/linemod.hpp"
#include "opencv2/rgbd/depth.hpp"
#include "opencv2/rgbd/kinfu.hpp"
#include "opencv2/rgbd/dynafu.hpp"
/** @defgroup rgbd RGB-Depth Processing
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
// This code is also subject to the license terms in the LICENSE_KinectFusion.md file found in this module's directory
#ifndef __OPENCV_RGBD_DYNAFU_HPP__
#define __OPENCV_RGBD_DYNAFU_HPP__
#include "opencv2/core.hpp"
#include "opencv2/core/affine.hpp"
namespace cv {
namespace dynafu {
struct CV_EXPORTS_W Params
{
/** @brief Default parameters
A set of parameters which provides better model quality, can be very slow.
*/
CV_WRAP static Ptr<Params> defaultParams();
/** @brief Coarse parameters
A set of parameters which provides better speed, can fail to match frames
in case of rapid sensor motion.
*/
CV_WRAP static Ptr<Params> coarseParams();
/** @brief frame size in pixels */
CV_PROP_RW Size frameSize;
/** @brief camera intrinsics */
CV_PROP Matx33f intr;
/** @brief pre-scale per 1 meter for input values
Typical values are:
* 5000 per 1 meter for the 16-bit PNG files of TUM database
* 1000 per 1 meter for Kinect 2 device
* 1 per 1 meter for the 32-bit float images in the ROS bag files
*/
CV_PROP_RW float depthFactor;
/** @brief Depth sigma in meters for bilateral smooth */
CV_PROP_RW float bilateral_sigma_depth;
/** @brief Spatial sigma in pixels for bilateral smooth */
CV_PROP_RW float bilateral_sigma_spatial;
/** @brief Kernel size in pixels for bilateral smooth */
CV_PROP_RW int bilateral_kernel_size;
/** @brief Number of pyramid levels for ICP */
CV_PROP_RW int pyramidLevels;
/** @brief Resolution of voxel space
Number of voxels in each dimension.
*/
CV_PROP_RW Vec3i volumeDims;
/** @brief Size of voxel in meters */
CV_PROP_RW float voxelSize;
/** @brief Minimal camera movement in meters
Integrate new depth frame only if camera movement exceeds this value.
*/
CV_PROP_RW float tsdf_min_camera_movement;
/** @brief initial volume pose in meters */
Affine3f volumePose;
/** @brief distance to truncate in meters
Distances to surface that exceed this value will be truncated to 1.0.
*/
CV_PROP_RW float tsdf_trunc_dist;
/** @brief max number of frames per voxel
Each voxel keeps running average of distances no longer than this value.
*/
CV_PROP_RW int tsdf_max_weight;
/** @brief A length of one raycast step
How many voxel sizes we skip at each raycast step
*/
CV_PROP_RW float raycast_step_factor;
// gradient delta in voxel sizes
// fixed at 1.0f
// float gradient_delta_factor;
/** @brief light pose for rendering in meters */
CV_PROP Vec3f lightPose;
/** @brief distance threshold for ICP in meters */
CV_PROP_RW float icpDistThresh;
/** angle threshold for ICP in radians */
CV_PROP_RW float icpAngleThresh;
/** number of ICP iterations for each pyramid level */
CV_PROP std::vector<int> icpIterations;
/** @brief Threshold for depth truncation in meters
All depth values beyond this threshold will be set to zero
*/
CV_PROP_RW float truncateThreshold;
};
/** @brief DynamicFusion implementation
This class implements a 3d reconstruction algorithm as described in @cite dynamicfusion.
It takes a sequence of depth images taken from a depth sensor
(or any depth image source such as a stereo camera matching algorithm or even a raymarching renderer).
The output can be obtained as a vector of points and their normals
or can be Phong-rendered from given camera pose.
It extends the KinectFusion algorithm to handle non-rigidly deforming scenes by maintaining a sparse
set of nodes covering the geometry such that each node contains a warp to transform it from a canonical
space to the live frame.
An internal representation of a model is a voxel cuboid that keeps TSDF values
which are a sort of distances to the surface (for details read the @cite kinectfusion article about TSDF).
There is no interface to that representation yet.
Note that DynamicFusion is based on the KinectFusion algorithm which is patented and its use may be
restricted by the list of patents mentioned in README.md file in this module directory.
That's why you need to set the OPENCV_ENABLE_NONFREE option in CMake to use DynamicFusion.
*/
class CV_EXPORTS_W DynaFu
{
public:
CV_WRAP static Ptr<DynaFu> create(const Ptr<Params>& _params);
virtual ~DynaFu();
/** @brief Get current parameters */
virtual const Params& getParams() const = 0;
/** @brief Renders a volume into an image
Renders a 0-surface of TSDF using Phong shading into a CV_8UC4 Mat.
Light pose is fixed in DynaFu params.
@param image resulting image
@param cameraPose pose of camera to render from. If empty then render from the current pose,
which is the last frame camera pose.
*/
CV_WRAP virtual void render(OutputArray image, const Matx44f& cameraPose = Matx44f::eye()) const = 0;
/** @brief Gets points and normals of current 3d mesh
The order of normals corresponds to order of points.
The order of points is undefined.
@param points vector of points which are 4-float vectors
@param normals vector of normals which are 4-float vectors
*/
CV_WRAP virtual void getCloud(OutputArray points, OutputArray normals) const = 0;
/** @brief Gets points of current 3d mesh
The order of points is undefined.
@param points vector of points which are 4-float vectors
*/
CV_WRAP virtual void getPoints(OutputArray points) const = 0;
/** @brief Calculates normals for given points
@param points input vector of points which are 4-float vectors
@param normals output vector of corresponding normals which are 4-float vectors
*/
CV_WRAP virtual void getNormals(InputArray points, OutputArray normals) const = 0;
/** @brief Resets the algorithm
Clears current model and resets a pose.
*/
CV_WRAP virtual void reset() = 0;
/** @brief Get current pose in voxel space */
virtual const Affine3f getPose() const = 0;
/** @brief Process next depth frame
Integrates depth into voxel space with respect to its ICP-calculated pose.
Input image is converted to CV_32F internally if it has another type.
@param depth one-channel image which size and depth scale is described in algorithm's parameters
@return true if it succeeded to align the new frame with the current scene, false otherwise
*/
CV_WRAP virtual bool update(InputArray depth) = 0;
virtual std::vector<Point3f> getNodesPos() const = 0;
virtual void marchCubes(OutputArray vertices, OutputArray edges) const = 0;
virtual void renderSurface(OutputArray depthImage, OutputArray vertImage, OutputArray normImage, bool warp=true) = 0;
};
//! @}
}
}
#endif
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
// This code is also subject to the license terms in the LICENSE_KinectFusion.md file found in this module's directory
#define CV_LOG_STRIP_LEVEL CV_LOG_LEVEL_VERBOSE
#include <iostream>
#include <fstream>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/core/utils/logger.hpp>
#include <opencv2/rgbd.hpp>
using namespace cv;
using namespace cv::dynafu;
using namespace std;
#ifdef HAVE_OPENCV_VIZ
#include <opencv2/viz.hpp>
#endif
static vector<string> readDepth(std::string fileList);
static vector<string> readDepth(std::string fileList)
{
vector<string> v;
fstream file(fileList);
if(!file.is_open())
throw std::runtime_error("Failed to read depth list");
std::string dir;
size_t slashIdx = fileList.rfind('/');
slashIdx = slashIdx != std::string::npos ? slashIdx : fileList.rfind('\\');
dir = fileList.substr(0, slashIdx);
while(!file.eof())
{
std::string s, imgPath;
std::getline(file, s);
if(s.empty() || s[0] == '#') continue;
std::stringstream ss;
ss << s;
double thumb;
ss >> thumb >> imgPath;
v.push_back(dir+'/'+imgPath);
}
return v;
}
struct DepthWriter
{
DepthWriter(string fileList) :
file(fileList, ios::out), count(0), dir()
{
size_t slashIdx = fileList.rfind('/');
slashIdx = slashIdx != std::string::npos ? slashIdx : fileList.rfind('\\');
dir = fileList.substr(0, slashIdx);
if(!file.is_open())
throw std::runtime_error("Failed to write depth list");
file << "# depth maps saved from device" << endl;
file << "# useless_number filename" << endl;
}
void append(InputArray _depth)
{
Mat depth = _depth.getMat();
string depthFname = cv::format("%04d.png", count);
string fullDepthFname = dir + '/' + depthFname;
if(!imwrite(fullDepthFname, depth))
throw std::runtime_error("Failed to write depth to file " + fullDepthFname);
file << count++ << " " << depthFname << endl;
}
fstream file;
int count;
string dir;
};
namespace Kinect2Params
{
static const Size frameSize = Size(512, 424);
// approximate values, no guarantee to be correct
static const float focal = 366.1f;
static const float cx = 258.2f;
static const float cy = 204.f;
static const float k1 = 0.12f;
static const float k2 = -0.34f;
static const float k3 = 0.12f;
};
struct DepthSource
{
public:
DepthSource(int cam) :
DepthSource("", cam)
{ }
DepthSource(String fileListName) :
DepthSource(fileListName, -1)
{ }
DepthSource(String fileListName, int cam) :
depthFileList(fileListName.empty() ? vector<string>() : readDepth(fileListName)),
frameIdx(0),
vc( cam >= 0 ? VideoCapture(VideoCaptureAPIs::CAP_OPENNI2 + cam) : VideoCapture()),
undistortMap1(),
undistortMap2(),
useKinect2Workarounds(true)
{
}
UMat getDepth()
{
UMat out;
if (!vc.isOpened())
{
if (frameIdx < depthFileList.size())
{
Mat f = cv::imread(depthFileList[frameIdx++], IMREAD_ANYDEPTH);
f.copyTo(out);
}
else
{
return UMat();
}
}
else
{
vc.grab();
vc.retrieve(out, CAP_OPENNI_DEPTH_MAP);
// workaround for Kinect 2
if(useKinect2Workarounds)
{
out = out(Rect(Point(), Kinect2Params::frameSize));
UMat outCopy;
// linear remap adds gradient between valid and invalid pixels
// which causes garbage, use nearest instead
remap(out, outCopy, undistortMap1, undistortMap2, cv::INTER_NEAREST);
cv::flip(outCopy, out, 1);
}
}
if (out.empty())
throw std::runtime_error("Matrix is empty");
return out;
}
bool empty()
{
return depthFileList.empty() && !(vc.isOpened());
}
void updateParams(Params& params)
{
if (vc.isOpened())
{
// this should be set according to the user's depth sensor
int w = (int)vc.get(VideoCaptureProperties::CAP_PROP_FRAME_WIDTH);
int h = (int)vc.get(VideoCaptureProperties::CAP_PROP_FRAME_HEIGHT);
float focal = (float)vc.get(CAP_OPENNI_DEPTH_GENERATOR | CAP_PROP_OPENNI_FOCAL_LENGTH);
// it's recommended to calibrate sensor to obtain its intrinsics
float fx, fy, cx, cy;
Size frameSize;
if(useKinect2Workarounds)
{
fx = fy = Kinect2Params::focal;
cx = Kinect2Params::cx;
cy = Kinect2Params::cy;
frameSize = Kinect2Params::frameSize;
}
else
{
fx = fy = focal;
cx = w/2 - 0.5f;
cy = h/2 - 0.5f;
frameSize = Size(w, h);
}
Matx33f camMatrix = Matx33f(fx, 0, cx,
0, fy, cy,
0, 0, 1);
params.frameSize = frameSize;
params.intr = camMatrix;
params.depthFactor = 1000.f;
Matx<float, 1, 5> distCoeffs;
distCoeffs(0) = Kinect2Params::k1;
distCoeffs(1) = Kinect2Params::k2;
distCoeffs(4) = Kinect2Params::k3;
if(useKinect2Workarounds)
initUndistortRectifyMap(camMatrix, distCoeffs, cv::noArray(),
camMatrix, frameSize, CV_16SC2,
undistortMap1, undistortMap2);
}
}
vector<string> depthFileList;
size_t frameIdx;
VideoCapture vc;
UMat undistortMap1, undistortMap2;
bool useKinect2Workarounds;
};
#ifdef HAVE_OPENCV_VIZ
const std::string vizWindowName = "cloud";
struct PauseCallbackArgs
{
PauseCallbackArgs(DynaFu& _df) : df(_df)
{ }
DynaFu& df;
};
void pauseCallback(const viz::MouseEvent& me, void* args);
void pauseCallback(const viz::MouseEvent& me, void* args)
{
if(me.type == viz::MouseEvent::Type::MouseMove ||
me.type == viz::MouseEvent::Type::MouseScrollDown ||
me.type == viz::MouseEvent::Type::MouseScrollUp)
{
PauseCallbackArgs pca = *((PauseCallbackArgs*)(args));
viz::Viz3d window(vizWindowName);
UMat rendered;
pca.df.render(rendered, window.getViewerPose().matrix);
imshow("render", rendered);
waitKey(1);
}
}
#endif
static const char* keys =
{
"{help h usage ? | | print this message }"
"{depth | | Path to depth.txt file listing a set of depth images }"
"{camera |0| Index of depth camera to be used as a depth source }"
"{coarse | | Run on coarse settings (fast but ugly) or on default (slow but looks better),"
" in coarse mode points and normals are displayed }"
"{idle | | Do not run DynaFu, just display depth frames }"
"{record | | Write depth frames to specified file list"
" (the same format as for the 'depth' key) }"
};
static const std::string message =
"\nThis demo uses live depth input or RGB-D dataset taken from"
"\nhttps://vision.in.tum.de/data/datasets/rgbd-dataset"
"\nto demonstrate KinectFusion implementation \n";
int main(int argc, char **argv)
{
bool coarse = false;
bool idle = false;
string recordPath;
CommandLineParser parser(argc, argv, keys);
parser.about(message);
if(!parser.check())
{
parser.printMessage();
parser.printErrors();
return -1;
}
if(parser.has("help"))
{
parser.printMessage();
return 0;
}
if(parser.has("coarse"))
{
coarse = true;
}
if(parser.has("record"))
{
recordPath = parser.get<String>("record");
}
if(parser.has("idle"))
{
idle = true;
}
Ptr<DepthSource> ds;
if (parser.has("depth"))
ds = makePtr<DepthSource>(parser.get<String>("depth"));
else
ds = makePtr<DepthSource>(parser.get<int>("camera"));
if (ds->empty())
{
std::cerr << "Failed to open depth source" << std::endl;
parser.printMessage();
return -1;
}
Ptr<DepthWriter> depthWriter;
if(!recordPath.empty())
depthWriter = makePtr<DepthWriter>(recordPath);
Ptr<Params> params;
Ptr<DynaFu> df;
if(coarse)
params = Params::coarseParams();
else
params = Params::defaultParams();
// These params can be different for each depth sensor
ds->updateParams(*params);
// Disables optimized (OpenCL) code paths explicitly; the DynaFu implementation is CPU-based
cv::setUseOptimized(false);
// Scene-specific params should be tuned for each scene individually
//params->volumePose = params->volumePose.translate(Vec3f(0.f, 0.f, 0.5f));
//params->tsdf_max_weight = 16;
namedWindow("OpenGL Window", WINDOW_OPENGL);
resizeWindow("OpenGL Window", 1, 1);
if(!idle)
df = DynaFu::create(params);
#ifdef HAVE_OPENCV_VIZ
cv::viz::Viz3d window(vizWindowName);
window.setViewerPose(Affine3f::Identity());
bool pause = false;
#endif
UMat rendered;
UMat points;
UMat normals;
int64 prevTime = getTickCount();
for(UMat frame = ds->getDepth(); !frame.empty(); frame = ds->getDepth())
{
Mat depthImg, vertImg, normImg;
setOpenGlContext("OpenGL Window");
df->renderSurface(depthImg, vertImg, normImg);
if(!depthImg.empty())
{
UMat depthCvt8, vertCvt8, normCvt8;
convertScaleAbs(depthImg, depthCvt8, 0.33*255);
vertImg.convertTo(vertCvt8, CV_8UC3, 255);
normImg.convertTo(normCvt8, CV_8UC3, 255);
imshow("Surface prediction", depthCvt8);
imshow("vertex prediction", vertCvt8);
imshow("normal prediction", normCvt8);
}
if(depthWriter)
depthWriter->append(frame);
#ifdef HAVE_OPENCV_VIZ
if(pause)
{
// doesn't happen in idle mode
df->getCloud(points, normals);
if(!points.empty() && !normals.empty())
{
viz::WCloud cloudWidget(points, viz::Color::white());
viz::WCloudNormals cloudNormals(points, normals, /*level*/1, /*scale*/0.05, viz::Color::gray());
Vec3d volSize = df->getParams().voxelSize*Vec3d(df->getParams().volumeDims);
window.showWidget("cube", viz::WCube(Vec3d::all(0),
volSize),
df->getParams().volumePose);
PauseCallbackArgs pca(*df);
window.registerMouseCallback(pauseCallback, (void*)&pca);
window.showWidget("text", viz::WText(cv::String("Move camera in this window. "
"Close the window or press Q to resume"), Point()));
window.spin();
window.removeWidget("text");
//window.removeWidget("cloud");
//window.removeWidget("normals");
window.registerMouseCallback(0);
}
pause = false;
}
else
#endif
{
UMat cvt8;
float depthFactor = params->depthFactor;
convertScaleAbs(frame, cvt8, 0.25*256. / depthFactor);
if(!idle)
{
imshow("depth", cvt8);
if(!df->update(frame))
{
df->reset();
std::cout << "reset" << std::endl;
}
#ifdef HAVE_OPENCV_VIZ
else
{
Mat meshCloud, meshEdges, meshPoly;
df->marchCubes(meshCloud, meshEdges);
for(int i = 0; i < meshEdges.size().height; i += 3)
{
meshPoly.push_back<int>(3);
meshPoly.push_back<int>(meshEdges.at<int>(i, 0));
meshPoly.push_back<int>(meshEdges.at<int>(i+1, 0));
meshPoly.push_back<int>(meshEdges.at<int>(i+2, 0));
}
viz::WMesh mesh(meshCloud.t(), meshPoly);
window.showWidget("mesh", mesh);
if(coarse)
{
df->getCloud(points, normals);
if(!points.empty() && !normals.empty())
{
viz::WCloud cloudWidget(points, viz::Color::white());
viz::WCloudNormals cloudNormals(points, normals, /*level*/1, /*scale*/0.05, viz::Color::gray());
//window.showWidget("cloud", cloudWidget);
//window.showWidget("normals", cloudNormals);
if(!df->getNodesPos().empty())
{
viz::WCloud nodeCloud(df->getNodesPos(), viz::Color::red());
nodeCloud.setRenderingProperty(viz::POINT_SIZE, 4);
window.showWidget("nodes", nodeCloud);
}
}
}
//window.showWidget("worldAxes", viz::WCoordinateSystem());
Vec3d volSize = df->getParams().voxelSize*df->getParams().volumeDims;
window.showWidget("cube", viz::WCube(Vec3d::all(0),
volSize),
df->getParams().volumePose);
window.setViewerPose(df->getPose());
window.spinOnce(1, true);
}
#endif
df->render(rendered);
}
else
{
rendered = cvt8;
}
}
int64 newTime = getTickCount();
putText(rendered, cv::format("FPS: %2d press R to reset, P to pause, Q to quit",
(int)(getTickFrequency()/(newTime - prevTime))),
Point(0, rendered.rows-1), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 255));
prevTime = newTime;
imshow("render", rendered);
int c = waitKey(1);
switch (c)
{
case 'r':
if(!idle)
df->reset();
break;
case 'q':
return 0;
#ifdef HAVE_OPENCV_VIZ
case 'p':
if(!idle)
pause = true;
#endif
default:
break;
}
}
return 0;
}
#include "dqb.hpp"
namespace cv {
namespace dynafu {
Quaternion::Quaternion() : coeff(Vec4f(0.f, 0.f, 0.f, 0.f))
{}
Quaternion::Quaternion(float w, float i, float j, float k) : coeff(Vec4f(w, i, j, k))
{}
Quaternion::Quaternion(const Affine3f& r)
{
// Compute trace of matrix
float T = (float)trace(r.matrix);
float S, X, Y, Z, W;
if ( T > 0.00000001f ) // to avoid large distortions!
{
S = sqrt(T) * 2.f;
X = (r.matrix(1, 2) - r.matrix(2, 1)) / S;
Y = (r.matrix(2, 0) - r.matrix(0, 2)) / S;
Z = (r.matrix(0, 1) - r.matrix(1, 0)) / S;
W = 0.25f * S;
}
else
{
if (r.matrix(0, 0) > r.matrix(1, 1) && r.matrix(0, 0) > r.matrix(2, 2))
{
// Column 0 :
S = sqrt(1.0f + r.matrix(0,0) - r.matrix(1,1) - r.matrix(2,2)) * 2.f;
X = 0.25f * S;
Y = (r.matrix(1, 0) + r.matrix(0, 1)) / S;
Z = (r.matrix(0, 2) + r.matrix(2, 0)) / S;
W = (r.matrix(2, 1) - r.matrix(1, 2)) / S;
}
else if (r.matrix(1, 1) > r.matrix(2, 2))
{
// Column 1 :
S = sqrt(1.0f + r.matrix(1,1) - r.matrix(0,0) - r.matrix(2,2)) * 2.f;
X = (r.matrix(1, 0) + r.matrix(0, 1)) / S;
Y = 0.25f * S;
Z = (r.matrix(2, 1) + r.matrix(1, 2)) / S;
W = (r.matrix(0, 2) - r.matrix(2, 0)) / S;
}
else
{ // Column 2 :
S = sqrt( 1.0f + r.matrix(2, 2) - r.matrix(0, 0) - r.matrix(1, 1)) * 2.f;
X = (r.matrix(0, 2) + r.matrix(2, 0)) / S;
Y = (r.matrix(2, 1) + r.matrix(1, 2)) / S;
Z = 0.25f * S;
W = (r.matrix(1,0) - r.matrix(0, 1)) / S;
}
}
coeff = Vec4f(W, -X, -Y, -Z);
}
Affine3f Quaternion::getRotation() const
{
float W = coeff[0], X = -coeff[1], Y = -coeff[2], Z = -coeff[3];
float xx = X * X, xy = X * Y, xz = X * Z, xw = X * W;
float yy = Y * Y, yz = Y * Z, yw = Y * W, zz = Z * Z;
float zw = Z * W;
Matx33f rot(1.f - 2.f * (yy + zz), 2.f * (xy + zw), 2.f * (xz - yw),
2.f * (xy - zw), 1.f - 2.f * (xx + zz), 2.f * (yz + xw),
2.f * (xz + yw), 2.f * (yz - xw), 1.f - 2.f * (xx + yy));
Affine3f Rt = Affine3f(rot, Vec3f::all(0));
return Rt;
}
Quaternion operator*(float a, const Quaternion& q)
{
Vec4f newQ = a*q.coeff;
return Quaternion(newQ[0], newQ[1], newQ[2], newQ[3]);
}
Quaternion operator*(const Quaternion& q, float a)
{
return a*q;
}
Quaternion operator/(const Quaternion& q, float a)
{
Vec4f newQ = q.coeff/a;
return Quaternion(newQ[0], newQ[1], newQ[2], newQ[3]);
}
Quaternion operator+(const Quaternion& q1, const Quaternion& q2)
{
Vec4f newQ = q1.coeff + q2.coeff;
return Quaternion(newQ[0], newQ[1], newQ[2], newQ[3]);
}
Quaternion& operator+=(Quaternion& q1, const Quaternion& q2)
{
q1.coeff += q2.coeff;
return q1;
}
Quaternion& operator/=(Quaternion& q, float a)
{
q.coeff /= a;
return q;
}
DualQuaternion::DualQuaternion() : q0(), qe()
{}
DualQuaternion::DualQuaternion(const Affine3f& rt)
{
q0 = Quaternion(rt);
Vec3f t = rt.translation();
float w = -0.5f*( t[0] * q0.i() + t[1] * q0.j() + t[2] * q0.k());
float i = 0.5f*( t[0] * q0.w() + t[1] * q0.k() - t[2] * q0.j());
float j = 0.5f*(-t[0] * q0.k() + t[1] * q0.w() + t[2] * q0.i());
float k = 0.5f*( t[0] * q0.j() - t[1] * q0.i() + t[2] * q0.w());
qe = Quaternion(w, i, j, k);
}
DualQuaternion::DualQuaternion(Quaternion& _q0, Quaternion& _qe) : q0(_q0), qe(_qe)
{}
void DualQuaternion::normalize()
{
float n = q0.normalize();
qe /= n;
}
DualQuaternion& operator+=(DualQuaternion& q1, const DualQuaternion& q2)
{
q1.q0 += q2.q0;
q1.qe += q2.qe;
return q1;
}
DualQuaternion operator*(float a, const DualQuaternion& q)
{
Quaternion newQ0 = a*q.q0;
Quaternion newQe = a*q.qe;
return DualQuaternion(newQ0, newQe);
}
Affine3f DualQuaternion::getAffine() const
{
float norm = q0.norm();
Affine3f Rt = (q0/norm).getRotation();
Vec3f t(0.f, 0.f, 0.f);
t[0] = 2.f*(-qe.w()*q0.i() + qe.i()*q0.w() - qe.j()*q0.k() + qe.k()*q0.j()) / norm;
t[1] = 2.f*(-qe.w()*q0.j() + qe.i()*q0.k() + qe.j()*q0.w() - qe.k()*q0.i()) / norm;
t[2] = 2.f*(-qe.w()*q0.k() - qe.i()*q0.j() + qe.j()*q0.i() + qe.k()*q0.w()) / norm;
return Rt.translate(t);
}
DualQuaternion DQB(std::vector<float>& weights, std::vector<DualQuaternion>& quats)
{
size_t n = weights.size();
DualQuaternion blended;
for(size_t i = 0; i < n; i++)
blended += weights[i] * quats[i];
blended.normalize();
return blended;
}
Affine3f DQB(std::vector<float>& weights, std::vector<Affine3f>& transforms)
{
size_t n = transforms.size();
std::vector<DualQuaternion> quats(n);
std::transform(transforms.begin(), transforms.end(),
quats.begin(), [](const Affine3f& rt){return DualQuaternion(rt);});
DualQuaternion blended = DQB(weights, quats);
return blended.getAffine();
}
} // namespace dynafu
} // namespace cv
\ No newline at end of file
/*
The code for Dual Quaternion Blending provided here is a modified
version of the sample codes by Arkan.
Copyright (c) 2019 Arkan
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#ifndef __OPENCV_RGBD_DQB_HPP__
#define __OPENCV_RGBD_DQB_HPP__
#include "opencv2/core.hpp"
#include "opencv2/core/affine.hpp"
namespace cv {
namespace dynafu {
class Quaternion
{
public:
Quaternion();
Quaternion(float w, float i, float j, float k);
// Generate a quaternion from rotation of a Rt matrix.
Quaternion(const Affine3f& r);
float normalize()
{
float n = (float)cv::norm(coeff);
coeff /= n;
return n;
}
Affine3f getRotation() const;
float w() const {return coeff[0];}
float i() const {return coeff[1];}
float j() const {return coeff[2];}
float k() const {return coeff[3];}
float norm() const {return (float)cv::norm(coeff);}
friend Quaternion operator*(float a, const Quaternion& q);
friend Quaternion operator*(const Quaternion& q, float a);
friend Quaternion operator/(const Quaternion& q, float a);
friend Quaternion operator+(const Quaternion& q1, const Quaternion& q2);
friend Quaternion& operator+=(Quaternion& q1, const Quaternion& q2);
friend Quaternion& operator/=(Quaternion& q, float a);
private:
// w, i, j, k coefficients
Vec4f coeff;
};
class DualQuaternion
{
public:
DualQuaternion();
DualQuaternion(const Affine3f& Rt);
DualQuaternion(Quaternion& q0, Quaternion& qe);
void normalize();
friend DualQuaternion& operator+=(DualQuaternion& q1, const DualQuaternion& q2);
friend DualQuaternion operator*(float a, const DualQuaternion& q);
Affine3f getAffine() const;
private:
Quaternion q0; // rotation quaternion
Quaternion qe; // translation quaternion
};
DualQuaternion DQB(std::vector<float>& weights, std::vector<DualQuaternion>& quats);
Affine3f DQB(std::vector<float>& weights, std::vector<Affine3f>& transforms);
} // namespace dynafu
} // namespace cv
#endif
\ No newline at end of file
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
// This code is also subject to the license terms in the LICENSE_KinectFusion.md file found in this module's directory
#include "precomp.hpp"
#include "dynafu_tsdf.hpp"
#include "warpfield.hpp"
#include "fast_icp.hpp"
#include "nonrigid_icp.hpp"
#include "kinfu_frame.hpp"
#include "opencv2/core/opengl.hpp"
#ifdef HAVE_OPENGL
#define GL_GLEXT_PROTOTYPES
#ifdef __APPLE__
# include <OpenGL/gl.h>
#else
#ifdef _WIN32
# define WIN32_LEAN_AND_MEAN
# include <windows.h>
#endif
# include <GL/gl.h>
#endif
#else
# define NO_OGL_ERR CV_Error(cv::Error::OpenGlNotSupported, \
"OpenGL support not enabled. Please rebuild the library with OpenGL support");
#endif
namespace cv {
namespace dynafu {
using namespace kinfu;
Ptr<Params> Params::defaultParams()
{
Params p;
p.frameSize = Size(640, 480);
float fx, fy, cx, cy;
fx = fy = 525.f;
cx = p.frameSize.width/2 - 0.5f;
cy = p.frameSize.height/2 - 0.5f;
p.intr = Matx33f(fx, 0, cx,
0, fy, cy,
0, 0, 1);
// 5000 for the 16-bit PNG files
// 1 for the 32-bit float images in the ROS bag files
p.depthFactor = 5000;
// sigma_depth is scaled by depthFactor when calling bilateral filter
p.bilateral_sigma_depth = 0.04f; //meter
p.bilateral_sigma_spatial = 4.5; //pixels
p.bilateral_kernel_size = 7; //pixels
p.icpAngleThresh = (float)(30. * CV_PI / 180.); // radians
p.icpDistThresh = 0.1f; // meters
p.icpIterations = {10, 5, 4};
p.pyramidLevels = (int)p.icpIterations.size();
p.tsdf_min_camera_movement = 0.f; //meters, disabled
p.volumeDims = Vec3i::all(512); //number of voxels
float volSize = 3.f;
p.voxelSize = volSize/512.f; //meters
// default pose of volume cube
p.volumePose = Affine3f().translate(Vec3f(-volSize/2.f, -volSize/2.f, 0.5f));
p.tsdf_trunc_dist = 0.04f; //meters;
p.tsdf_max_weight = 64; //frames
p.raycast_step_factor = 0.25f; //in voxel sizes
// gradient delta factor is fixed at 1.0f and is not used
//p.gradient_delta_factor = 0.5f; //in voxel sizes
//p.lightPose = p.volume_pose.translation()/4; //meters
p.lightPose = Vec3f::all(0.f); //meters
// depth truncation is not used by default but can be useful in some scenes
p.truncateThreshold = 0.f; //meters
return makePtr<Params>(p);
}
Ptr<Params> Params::coarseParams()
{
Ptr<Params> p = defaultParams();
p->icpIterations = {5, 3, 2};
p->pyramidLevels = (int)p->icpIterations.size();
float volSize = 3.f;
p->volumeDims = Vec3i::all(128); //number of voxels
p->voxelSize = volSize/128.f;
p->raycast_step_factor = 0.75f; //in voxel sizes
return p;
}
// T should be Mat or UMat
template< typename T >
class DynaFuImpl : public DynaFu
{
public:
DynaFuImpl(const Params& _params);
virtual ~DynaFuImpl();
const Params& getParams() const CV_OVERRIDE;
void render(OutputArray image, const Matx44f& cameraPose) const CV_OVERRIDE;
void getCloud(OutputArray points, OutputArray normals) const CV_OVERRIDE;
void getPoints(OutputArray points) const CV_OVERRIDE;
void getNormals(InputArray points, OutputArray normals) const CV_OVERRIDE;
void reset() CV_OVERRIDE;
const Affine3f getPose() const CV_OVERRIDE;
bool update(InputArray depth) CV_OVERRIDE;
bool updateT(const T& depth);
std::vector<Point3f> getNodesPos() const CV_OVERRIDE;
void marchCubes(OutputArray vertices, OutputArray edges) const CV_OVERRIDE;
void renderSurface(OutputArray depthImage, OutputArray vertImage, OutputArray normImage, bool warp=true) CV_OVERRIDE;
private:
Params params;
cv::Ptr<ICP> icp;
cv::Ptr<NonRigidICP> dynafuICP;
cv::Ptr<TSDFVolume> volume;
int frameCounter;
Affine3f pose;
std::vector<T> pyrPoints;
std::vector<T> pyrNormals;
WarpField warpfield;
#ifdef HAVE_OPENGL
ogl::Arrays arr;
ogl::Buffer idx;
#endif
void drawScene(OutputArray depthImg, OutputArray shadedImg);
};
template< typename T>
std::vector<Point3f> DynaFuImpl<T>::getNodesPos() const {
NodeVectorType nv = warpfield.getNodes();
std::vector<Point3f> nodesPos;
for(auto n: nv)
nodesPos.push_back(n->pos);
return nodesPos;
}
template< typename T >
DynaFuImpl<T>::DynaFuImpl(const Params &_params) :
params(_params),
icp(makeICP(params.intr, params.icpIterations, params.icpAngleThresh, params.icpDistThresh)),
dynafuICP(makeNonRigidICP(params.intr, volume, 2)),
volume(makeTSDFVolume(params.volumeDims, params.voxelSize, params.volumePose,
params.tsdf_trunc_dist, params.tsdf_max_weight,
params.raycast_step_factor)),
pyrPoints(), pyrNormals(), warpfield()
{
#ifdef HAVE_OPENGL
// Bind framebuffer for off-screen rendering
unsigned int fbo_depth;
glGenRenderbuffersEXT(1, &fbo_depth);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, fbo_depth);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, params.frameSize.width, params.frameSize.height);
unsigned int fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, fbo_depth);
// Make a color attachment to this framebuffer
unsigned int fbo_color;
glGenRenderbuffersEXT(1, &fbo_color);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, fbo_color);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, params.frameSize.width, params.frameSize.height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, fbo_color);
#endif
reset();
}
template< typename T >
void DynaFuImpl<T>::drawScene(OutputArray depthImage, OutputArray shadedImage)
{
#ifdef HAVE_OPENGL
glViewport(0, 0, params.frameSize.width, params.frameSize.height);
glEnable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float fovX = params.frameSize.width/params.intr(0, 0);
float fovY = params.frameSize.height/params.intr(1, 1);
Vec3f t;
t = params.volumePose.translation();
double nearZ = t[2];
double farZ = params.volumeDims[2] * params.voxelSize + nearZ;
// Define viewing volume
glFrustum(-nearZ*fovX/2, nearZ*fovX/2, -nearZ*fovY/2, nearZ*fovY/2, nearZ, farZ);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.f, 1.f, -1.f); //Flip Z as camera points towards -ve Z axis
ogl::render(arr, idx, ogl::TRIANGLES);
float f[params.frameSize.width*params.frameSize.height];
float pixels[params.frameSize.width*params.frameSize.height][3];
glReadPixels(0, 0, params.frameSize.width, params.frameSize.height, GL_DEPTH_COMPONENT, GL_FLOAT, &f[0]);
glReadPixels(0, 0, params.frameSize.width, params.frameSize.height, GL_RGB, GL_FLOAT, &pixels[0][0]);
// linearise depth
for(int i = 0; i < params.frameSize.width*params.frameSize.height; i++)
{
f[i] = farZ * nearZ / (f[i]*(nearZ - farZ) + farZ);
if(f[i] >= farZ)
f[i] = std::numeric_limits<float>::quiet_NaN();
}
if(depthImage.needed()) {
Mat depthData(params.frameSize.height, params.frameSize.width, CV_32F, f);
depthData.copyTo(depthImage);
}
if(shadedImage.needed()) {
Mat shadeData(params.frameSize.height, params.frameSize.width, CV_32FC3, pixels);
shadeData.copyTo(shadedImage);
}
#else
CV_UNUSED(depthImage);
CV_UNUSED(shadedImage);
NO_OGL_ERR;
#endif
}
template< typename T >
void DynaFuImpl<T>::reset()
{
frameCounter = 0;
pose = Affine3f::Identity();
warpfield.setAllRT(Affine3f::Identity());
volume->reset();
}
template< typename T >
DynaFuImpl<T>::~DynaFuImpl()
{ }
template< typename T >
const Params& DynaFuImpl<T>::getParams() const
{
return params;
}
template< typename T >
const Affine3f DynaFuImpl<T>::getPose() const
{
return pose;
}
template<>
bool DynaFuImpl<Mat>::update(InputArray _depth)
{
CV_Assert(!_depth.empty() && _depth.size() == params.frameSize);
Mat depth;
if(_depth.isUMat())
{
_depth.copyTo(depth);
return updateT(depth);
}
else
{
return updateT(_depth.getMat());
}
}
template<>
bool DynaFuImpl<UMat>::update(InputArray _depth)
{
CV_TRACE_FUNCTION();
CV_Assert(!_depth.empty() && _depth.size() == params.frameSize);
UMat depth;
if(!_depth.isUMat())
{
_depth.copyTo(depth);
return updateT(depth);
}
else
{
return updateT(_depth.getUMat());
}
}
template< typename T >
bool DynaFuImpl<T>::updateT(const T& _depth)
{
CV_TRACE_FUNCTION();
T depth;
if(_depth.type() != DEPTH_TYPE)
_depth.convertTo(depth, DEPTH_TYPE);
else
depth = _depth;
std::vector<T> newPoints, newNormals;
makeFrameFromDepth(depth, newPoints, newNormals, params.intr,
params.pyramidLevels,
params.depthFactor,
params.bilateral_sigma_depth,
params.bilateral_sigma_spatial,
params.bilateral_kernel_size,
params.truncateThreshold);
if(frameCounter == 0)
{
// use depth instead of distance
volume->integrate(depth, params.depthFactor, pose, params.intr, makePtr<WarpField>(warpfield));
pyrPoints = newPoints;
pyrNormals = newNormals;
warpfield.setAllRT(Affine3f::Identity());
}
else
{
UMat wfPoints;
volume->fetchPointsNormals(wfPoints, noArray(), true);
warpfield.updateNodesFromPoints(wfPoints);
Mat _depthRender, estdDepth, _vertRender, _normRender;
renderSurface(_depthRender, _vertRender, _normRender, false);
_depthRender.convertTo(estdDepth, DEPTH_TYPE);
std::vector<T> estdPoints, estdNormals;
makeFrameFromDepth(estdDepth, estdPoints, estdNormals, params.intr,
params.pyramidLevels,
1.f,
params.bilateral_sigma_depth,
params.bilateral_sigma_spatial,
params.bilateral_kernel_size,
params.truncateThreshold);
pyrPoints = estdPoints;
pyrNormals = estdNormals;
Affine3f affine;
bool success = icp->estimateTransform(affine, pyrPoints, pyrNormals, newPoints, newNormals);
if(!success)
return false;
pose = pose * affine;
for(int iter = 0; iter < 1; iter++)
{
renderSurface(_depthRender, _vertRender, _normRender);
_depthRender.convertTo(estdDepth, DEPTH_TYPE);
makeFrameFromDepth(estdDepth, estdPoints, estdNormals, params.intr,
params.pyramidLevels,
1.f,
params.bilateral_sigma_depth,
params.bilateral_sigma_spatial,
params.bilateral_kernel_size,
params.truncateThreshold);
success = dynafuICP->estimateWarpNodes(warpfield, pose, _vertRender, estdPoints[0],
estdNormals[0],
newPoints[0], newNormals[0]);
if(!success)
return false;
}
float rnorm = (float)cv::norm(affine.rvec());
float tnorm = (float)cv::norm(affine.translation());
// We do not integrate volume if camera does not move
if((rnorm + tnorm)/2 >= params.tsdf_min_camera_movement)
{
// use depth instead of distance
volume->integrate(depth, params.depthFactor, pose, params.intr, makePtr<WarpField>(warpfield));
}
}
std::cout << "Frame# " << frameCounter++ << std::endl;
return true;
}
template< typename T >
void DynaFuImpl<T>::render(OutputArray image, const Matx44f& _cameraPose) const
{
CV_TRACE_FUNCTION();
Affine3f cameraPose(_cameraPose);
const Affine3f id = Affine3f::Identity();
if((cameraPose.rotation() == pose.rotation() && cameraPose.translation() == pose.translation()) ||
(cameraPose.rotation() == id.rotation() && cameraPose.translation() == id.translation()))
{
renderPointsNormals(pyrPoints[0], pyrNormals[0], image, params.lightPose);
}
else
{
T points, normals;
volume->raycast(cameraPose, params.intr, params.frameSize, points, normals);
renderPointsNormals(points, normals, image, params.lightPose);
}
}
template< typename T >
void DynaFuImpl<T>::getCloud(OutputArray p, OutputArray n) const
{
volume->fetchPointsNormals(p, n);
}
template< typename T >
void DynaFuImpl<T>::getPoints(OutputArray points) const
{
volume->fetchPointsNormals(points, noArray());
}
template< typename T >
void DynaFuImpl<T>::getNormals(InputArray points, OutputArray normals) const
{
volume->fetchNormals(points, normals);
}
template< typename T >
void DynaFuImpl<T>::marchCubes(OutputArray vertices, OutputArray edges) const
{
volume->marchCubes(vertices, edges);
}
template<typename T>
void DynaFuImpl<T>::renderSurface(OutputArray depthImage, OutputArray vertImage, OutputArray normImage, bool warp)
{
#ifdef HAVE_OPENGL
Mat _vertices, vertices, normals, meshIdx;
volume->marchCubes(_vertices, noArray());
if(_vertices.empty()) return;
_vertices.convertTo(vertices, POINT_TYPE);
getNormals(vertices, normals);
Mat warpedVerts(vertices.size(), vertices.type());
Affine3f invCamPose(pose.inv());
for(int i = 0; i < vertices.size().height; i++) {
ptype v = vertices.at<ptype>(i);
// transform vertex to RGB space
Point3f pVoxel = (params.volumePose.inv() * Point3f(v[0], v[1], v[2])) / params.voxelSize;
Point3f pGlobal = Point3f(pVoxel.x / params.volumeDims[0],
pVoxel.y / params.volumeDims[1],
pVoxel.z / params.volumeDims[2]);
vertices.at<ptype>(i) = ptype(pGlobal.x, pGlobal.y, pGlobal.z, 1.f);
// transform normals to RGB space
ptype n = normals.at<ptype>(i);
Point3f nGlobal = params.volumePose.rotation().inv() * Point3f(n[0], n[1], n[2]);
nGlobal.x = (nGlobal.x + 1)/2;
nGlobal.y = (nGlobal.y + 1)/2;
nGlobal.z = (nGlobal.z + 1)/2;
normals.at<ptype>(i) = ptype(nGlobal.x, nGlobal.y, nGlobal.z, 1.f);
//Point3f p = Point3f(v[0], v[1], v[2]);
if(!warp)
{
Point3f p(invCamPose * params.volumePose * (pVoxel*params.voxelSize));
warpedVerts.at<ptype>(i) = ptype(p.x, p.y, p.z, 1.f);
}
else
{
int numNeighbours = 0;
const nodeNeighboursType neighbours = volume->getVoxelNeighbours(pVoxel, numNeighbours);
Point3f p = (invCamPose * params.volumePose) * warpfield.applyWarp(pVoxel*params.voxelSize, neighbours, numNeighbours);
warpedVerts.at<ptype>(i) = ptype(p.x, p.y, p.z, 1.f);
}
}
for(int i = 0; i < vertices.size().height; i++)
meshIdx.push_back<int>(i);
arr.setVertexArray(warpedVerts);
arr.setColorArray(vertices);
idx.copyFrom(meshIdx);
drawScene(depthImage, vertImage);
arr.setVertexArray(warpedVerts);
arr.setColorArray(normals);
drawScene(noArray(), normImage);
#else
CV_UNUSED(depthImage);
CV_UNUSED(vertImage);
CV_UNUSED(normImage);
CV_UNUSED(warp);
NO_OGL_ERR;
#endif
}
// importing class
#ifdef OPENCV_ENABLE_NONFREE
Ptr<DynaFu> DynaFu::create(const Ptr<Params>& params)
{
return makePtr< DynaFuImpl<Mat> >(*params);
}
#else
Ptr<DynaFu> DynaFu::create(const Ptr<Params>& /*params*/)
{
CV_Error(Error::StsNotImplemented,
"This algorithm is patented and is excluded in this configuration; "
"Set OPENCV_ENABLE_NONFREE CMake option and rebuild the library");
}
#endif
DynaFu::~DynaFu() {}
} // namespace dynafu
} // namespace cv
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
// This code is also subject to the license terms in the LICENSE_KinectFusion.md file found in this module's directory
#include "precomp.hpp"
#include "dynafu_tsdf.hpp"
#include "marchingcubes.hpp"
namespace cv {
namespace dynafu {
using namespace kinfu;
// TODO: Optimization possible:
// * volumeType can be FP16
// * weight can be int16
typedef float volumeType;
struct Voxel
{
volumeType v;
float weight;
nodeNeighboursType neighbours;
float neighbourDists[DYNAFU_MAX_NEIGHBOURS];
int n;
};
typedef Vec<uchar, sizeof(Voxel)> VecT;
class TSDFVolumeCPU : public TSDFVolume
{
public:
// dimension in voxels, size in meters
TSDFVolumeCPU(Point3i _res, float _voxelSize, cv::Affine3f _pose, float _truncDist, int _maxWeight,
float _raycastStepFactor, bool zFirstMemOrder = true);
virtual void integrate(InputArray _depth, float depthFactor, cv::Affine3f cameraPose, cv::kinfu::Intr intrinsics, Ptr<WarpField> wf) override;
virtual void raycast(cv::Affine3f cameraPose, cv::kinfu::Intr intrinsics, cv::Size frameSize,
cv::OutputArray points, cv::OutputArray normals) const override;
virtual void fetchNormals(cv::InputArray points, cv::OutputArray _normals) const override;
virtual void fetchPointsNormals(cv::OutputArray points, cv::OutputArray normals, bool fetchVoxels) const override;
virtual void marchCubes(OutputArray _vertices, OutputArray _edges) const override;
virtual void reset() override;
volumeType interpolateVoxel(cv::Point3f p) const;
Point3f getNormalVoxel(cv::Point3f p) const;
nodeNeighboursType const& getVoxelNeighbours(Point3i coords, int& n) const override;
// See zFirstMemOrder arg of parent class constructor
// for the array layout info
// Consist of Voxel elements
Mat volume;
private:
Point3f interpolate(Point3f p1, Point3f p2, float v1, float v2) const;
};
TSDFVolume::TSDFVolume(Point3i _res, float _voxelSize, Affine3f _pose, float _truncDist, int _maxWeight,
float _raycastStepFactor, bool zFirstMemOrder) :
voxelSize(_voxelSize),
voxelSizeInv(1.f/_voxelSize),
volResolution(_res),
maxWeight((float)_maxWeight),
pose(_pose),
raycastStepFactor(_raycastStepFactor)
{
// Unlike the original code, this should work with any volume size,
// not only when each of x, y, z is a multiple of 32
volSize = Point3f(volResolution) * voxelSize;
truncDist = std::max(_truncDist, 2.1f * voxelSize);
// (xRes*yRes*zRes) array
// Depending on zFirstMemOrder arg:
// &elem(x, y, z) = data + x*zRes*yRes + y*zRes + z;
// &elem(x, y, z) = data + x + y*xRes + z*xRes*yRes;
int xdim, ydim, zdim;
if(zFirstMemOrder)
{
xdim = volResolution.z * volResolution.y;
ydim = volResolution.z;
zdim = 1;
}
else
{
xdim = 1;
ydim = volResolution.x;
zdim = volResolution.x * volResolution.y;
}
volDims = Vec4i(xdim, ydim, zdim);
neighbourCoords = Vec8i(
volDims.dot(Vec4i(0, 0, 0)),
volDims.dot(Vec4i(0, 0, 1)),
volDims.dot(Vec4i(0, 1, 0)),
volDims.dot(Vec4i(0, 1, 1)),
volDims.dot(Vec4i(1, 0, 0)),
volDims.dot(Vec4i(1, 0, 1)),
volDims.dot(Vec4i(1, 1, 0)),
volDims.dot(Vec4i(1, 1, 1))
);
}
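// Illustrative sketch (the helper name is ours; it is not called by the pipeline):
// with the strides chosen above, the linear index of voxel (x, y, z) inside the
// flattened volume array is just the dot product of the coordinate with volDims,
// which is exactly what the neighbourCoords offsets computed above encode for
// the 8 corners of a cube.
static inline int voxelLinearIndex(const Vec4i& dims, int x, int y, int z)
{
    // same expression for both memory orders; only the stride values differ
    return x*dims[0] + y*dims[1] + z*dims[2];
}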
// dimension in voxels, size in meters
TSDFVolumeCPU::TSDFVolumeCPU(Point3i _res, float _voxelSize, cv::Affine3f _pose, float _truncDist, int _maxWeight,
float _raycastStepFactor, bool zFirstMemOrder) :
TSDFVolume(_res, _voxelSize, _pose, _truncDist, _maxWeight, _raycastStepFactor, zFirstMemOrder)
{
volume = Mat(1, volResolution.x * volResolution.y * volResolution.z, rawType<Voxel>());
reset();
}
// zero volume, leave rest params the same
void TSDFVolumeCPU::reset()
{
CV_TRACE_FUNCTION();
volume.forEach<VecT>([](VecT& vv, const int* /* position */)
{
Voxel& v = reinterpret_cast<Voxel&>(vv);
v.v = 0; v.weight = 0;
});
}
static const bool fixMissingData = false;
static inline depthType bilinearDepth(const Depth& m, cv::Point2f pt)
{
const depthType defaultValue = qnan;
if(pt.x < 0 || pt.x >= m.cols-1 ||
pt.y < 0 || pt.y >= m.rows-1)
return defaultValue;
int xi = cvFloor(pt.x), yi = cvFloor(pt.y);
const depthType* row0 = m[yi+0];
const depthType* row1 = m[yi+1];
depthType v00 = row0[xi+0];
depthType v01 = row0[xi+1];
depthType v10 = row1[xi+0];
depthType v11 = row1[xi+1];
// assume correct depth is positive
bool b00 = v00 > 0;
bool b01 = v01 > 0;
bool b10 = v10 > 0;
bool b11 = v11 > 0;
if(!fixMissingData)
{
if(!(b00 && b01 && b10 && b11))
return defaultValue;
else
{
float tx = pt.x - xi, ty = pt.y - yi;
depthType v0 = v00 + tx*(v01 - v00);
depthType v1 = v10 + tx*(v11 - v10);
return v0 + ty*(v1 - v0);
}
}
else
{
int nz = b00 + b01 + b10 + b11;
if(nz == 0)
{
return defaultValue;
}
if(nz == 1)
{
if(b00) return v00;
if(b01) return v01;
if(b10) return v10;
if(b11) return v11;
}
else if(nz == 2)
{
if(b00 && b10) v01 = v00, v11 = v10;
if(b01 && b11) v00 = v01, v10 = v11;
if(b00 && b01) v10 = v00, v11 = v01;
if(b10 && b11) v00 = v10, v01 = v11;
if(b00 && b11) v01 = v10 = (v00 + v11)*0.5f;
if(b01 && b10) v00 = v11 = (v01 + v10)*0.5f;
}
else if(nz == 3)
{
if(!b00) v00 = v10 + v01 - v11;
if(!b01) v01 = v00 + v11 - v10;
if(!b10) v10 = v00 + v11 - v01;
if(!b11) v11 = v01 + v10 - v00;
}
float tx = pt.x - xi, ty = pt.y - yi;
depthType v0 = v00 + tx*(v01 - v00);
depthType v1 = v10 + tx*(v11 - v10);
return v0 + ty*(v1 - v0);
}
}
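// Worked example for the fixMissingData == true path above (illustrative only):
// with corner depths v00 = 1.0, v01 = 1.2, v11 = 1.4 valid and v10 missing
// (nz == 3), the missing corner is completed from the plane through the other
// three, v10 = v00 + v11 - v01 = 1.2, before the usual bilinear blend
//   v = (1-ty)*((1-tx)*v00 + tx*v01) + ty*((1-tx)*v10 + tx*v11).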
struct IntegrateInvoker : ParallelLoopBody
{
IntegrateInvoker(TSDFVolumeCPU& _volume, const Depth& _depth, Intr intrinsics, cv::Affine3f cameraPose,
float depthFactor, Ptr<WarpField> wf) :
ParallelLoopBody(),
volume(_volume),
depth(_depth),
proj(intrinsics.makeProjector()),
vol2cam(cameraPose.inv() * _volume.pose),
truncDistInv(1.f/_volume.truncDist),
dfac(1.f/depthFactor),
warpfield(wf)
{
volDataStart = volume.volume.ptr<Voxel>();
}
virtual void operator() (const Range& range) const override
{
CV_TRACE_FUNCTION();
for(int x = range.start; x < range.end; x++)
{
Voxel* volDataX = volDataStart + x*volume.volDims[0];
for(int y = 0; y < volume.volResolution.y; y++)
{
Voxel* volDataY = volDataX+y*volume.volDims[1];
for(int z = 0; z < volume.volResolution.z; z++)
{
Voxel& voxel = volDataY[z*volume.volDims[2]];
Point3f volPt = Point3f((float)x, (float)y, (float)z)*volume.voxelSize;
if(warpfield->getNodeIndex())
{
std::vector<int> indices(warpfield->k);
std::vector<float> dists(warpfield->k);
warpfield->findNeighbours(volPt, indices, dists);
voxel.n = 0;
for(size_t i = 0; i < indices.size(); i++)
{
if(std::isnan(dists[i])) continue;
voxel.neighbourDists[voxel.n] = dists[i];
voxel.neighbours[voxel.n++] = indices[i];
}
}
Point3f camSpacePt =
vol2cam * warpfield->applyWarp(volPt, voxel.neighbours, voxel.n);
if(camSpacePt.z <= 0)
continue;
Point3f camPixVec;
Point2f projected = proj(camSpacePt, camPixVec);
depthType v = bilinearDepth(depth, projected);
if(v == 0)
continue;
// norm(camPixVec) produces double which is too slow
float pixNorm = sqrt(camPixVec.dot(camPixVec));
// difference between distances of point and of surface to camera
volumeType sdf = pixNorm*(v*dfac - camSpacePt.z);
// possible alternative is:
// kftype sdf = norm(camSpacePt)*(v*dfac/camSpacePt.z - 1);
if(sdf >= -volume.truncDist)
{
volumeType tsdf = fmin(1.f, sdf * truncDistInv);
float& weight = voxel.weight;
volumeType& value = voxel.v;
// update TSDF
float newWeight = 0;
if(warpfield->getNodesLen() >= (size_t)warpfield->k)
{
for(int i = 0; i < voxel.n; i++)
newWeight += sqrt(voxel.neighbourDists[i]);
if(voxel.n > 0) newWeight /= voxel.n;
} else newWeight = 1.f;
if((weight + newWeight) != 0)
{
value = (value*weight+tsdf*newWeight) / (weight+newWeight);
weight = min(weight+newWeight, volume.maxWeight);
}
}
}
}
}
}
TSDFVolumeCPU& volume;
const Depth& depth;
const Intr::Projector proj;
const cv::Affine3f vol2cam;
const float truncDistInv;
const float dfac;
Voxel* volDataStart;
Ptr<WarpField> warpfield;
};
// use depth instead of distance (optimization)
void TSDFVolumeCPU::integrate(InputArray _depth, float depthFactor, cv::Affine3f cameraPose, Intr intrinsics, Ptr<WarpField> wf)
{
CV_TRACE_FUNCTION();
CV_Assert(_depth.type() == DEPTH_TYPE);
Depth depth = _depth.getMat();
IntegrateInvoker ii(*this, depth, intrinsics, cameraPose, depthFactor, wf);
Range range(0, volResolution.x);
parallel_for_(range, ii);
}
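// Illustrative sketch (the helper name is ours; it is not called by the pipeline):
// the per-voxel fusion performed in IntegrateInvoker reduces to a weighted
// running average of the truncated SDF. The new sample's weight comes from the
// mean sqrt neighbour distance of the warp field (or 1 while there are fewer
// than k nodes), and the accumulated weight is clamped to maxWeight.
static inline void fuseTsdfSample(volumeType& value, float& weight,
                                  volumeType tsdf, float newWeight, float maxWeight)
{
    if((weight + newWeight) != 0)
    {
        value  = (value*weight + tsdf*newWeight) / (weight + newWeight);
        weight = min(weight + newWeight, maxWeight);
    }
}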
inline volumeType TSDFVolumeCPU::interpolateVoxel(Point3f p) const
{
int xdim = volDims[0], ydim = volDims[1], zdim = volDims[2];
int ix = cvFloor(p.x);
int iy = cvFloor(p.y);
int iz = cvFloor(p.z);
float tx = p.x - ix;
float ty = p.y - iy;
float tz = p.z - iz;
int coordBase = ix*xdim + iy*ydim + iz*zdim;
const Voxel* volData = volume.ptr<Voxel>();
volumeType vx[8];
for(int i = 0; i < 8; i++)
vx[i] = volData[neighbourCoords[i] + coordBase].v;
volumeType v00 = vx[0] + tz*(vx[1] - vx[0]);
volumeType v01 = vx[2] + tz*(vx[3] - vx[2]);
volumeType v10 = vx[4] + tz*(vx[5] - vx[4]);
volumeType v11 = vx[6] + tz*(vx[7] - vx[6]);
volumeType v0 = v00 + ty*(v01 - v00);
volumeType v1 = v10 + ty*(v11 - v10);
return v0 + tx*(v1 - v0);
}
inline Point3f TSDFVolumeCPU::getNormalVoxel(Point3f p) const
{
const int xdim = volDims[0], ydim = volDims[1], zdim = volDims[2];
const Voxel* volData = volume.ptr<Voxel>();
if(p.x < 1 || p.x >= volResolution.x - 2 ||
p.y < 1 || p.y >= volResolution.y - 2 ||
p.z < 1 || p.z >= volResolution.z - 2)
return nan3;
int ix = cvFloor(p.x);
int iy = cvFloor(p.y);
int iz = cvFloor(p.z);
float tx = p.x - ix;
float ty = p.y - iy;
float tz = p.z - iz;
int coordBase = ix*xdim + iy*ydim + iz*zdim;
Vec3f an;
for(int c = 0; c < 3; c++)
{
const int dim = volDims[c];
float& nv = an[c];
volumeType vx[8];
for(int i = 0; i < 8; i++)
vx[i] = volData[neighbourCoords[i] + coordBase + 1*dim].v -
volData[neighbourCoords[i] + coordBase - 1*dim].v;
volumeType v00 = vx[0] + tz*(vx[1] - vx[0]);
volumeType v01 = vx[2] + tz*(vx[3] - vx[2]);
volumeType v10 = vx[4] + tz*(vx[5] - vx[4]);
volumeType v11 = vx[6] + tz*(vx[7] - vx[6]);
volumeType v0 = v00 + ty*(v01 - v00);
volumeType v1 = v10 + ty*(v11 - v10);
nv = v0 + tx*(v1 - v0);
}
return normalize(an);
}
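// Note (illustrative): getNormalVoxel() above estimates the surface normal as
// the normalized gradient of the TSDF, computed with central differences along
// each axis,
//   n ~ ( F(x+1,y,z) - F(x-1,y,z),
//         F(x,y+1,z) - F(x,y-1,z),
//         F(x,y,z+1) - F(x,y,z-1) ),
// where each difference is trilinearly interpolated at the query point.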
struct RaycastInvoker : ParallelLoopBody
{
RaycastInvoker(Points& _points, Normals& _normals, Affine3f cameraPose,
Intr intrinsics, const TSDFVolumeCPU& _volume) :
ParallelLoopBody(),
points(_points),
normals(_normals),
volume(_volume),
tstep(volume.truncDist * volume.raycastStepFactor),
// We subtract the voxel size to minimize the number of boundary checks later.
// Note: the origin of the volume coordinate system is placed
// at the center of voxel (0,0,0), not at the corner of the voxel!
boxMax(volume.volSize - Point3f(volume.voxelSize,
volume.voxelSize,
volume.voxelSize)),
boxMin(),
cam2vol(volume.pose.inv() * cameraPose),
vol2cam(cameraPose.inv() * volume.pose),
reproj(intrinsics.makeReprojector())
{ }
virtual void operator() (const Range& range) const override
{
const Point3f camTrans = cam2vol.translation();
const Matx33f camRot = cam2vol.rotation();
const Matx33f volRot = vol2cam.rotation();
for(int y = range.start; y < range.end; y++)
{
ptype* ptsRow = points[y];
ptype* nrmRow = normals[y];
for(int x = 0; x < points.cols; x++)
{
Point3f point = nan3, normal = nan3;
Point3f orig = camTrans;
// direction through pixel in volume space
Point3f dir = normalize(Vec3f(camRot * reproj(Point3f((float)x, (float)y, 1.f))));
// compute intersection of ray with all six bbox planes
Vec3f rayinv(1.f/dir.x, 1.f/dir.y, 1.f/dir.z);
Point3f tbottom = rayinv.mul(boxMin - orig);
Point3f ttop = rayinv.mul(boxMax - orig);
// re-order intersections to find smallest and largest on each axis
Point3f minAx(min(ttop.x, tbottom.x), min(ttop.y, tbottom.y), min(ttop.z, tbottom.z));
Point3f maxAx(max(ttop.x, tbottom.x), max(ttop.y, tbottom.y), max(ttop.z, tbottom.z));
// near clipping plane
const float clip = 0.f;
float tmin = max(max(max(minAx.x, minAx.y), max(minAx.x, minAx.z)), clip);
float tmax = min(min(maxAx.x, maxAx.y), min(maxAx.x, maxAx.z));
// precautions against getting coordinates out of bounds
tmin = tmin + tstep;
tmax = tmax - tstep;
if(tmin < tmax)
{
// interpolation optimized a little
orig = orig*volume.voxelSizeInv;
dir = dir*volume.voxelSizeInv;
Point3f rayStep = dir * tstep;
Point3f next = (orig + dir * tmin);
volumeType f = volume.interpolateVoxel(next), fnext = f;
//raymarch
int steps = 0;
int nSteps = (int)floor((tmax - tmin)/tstep);
for(; steps < nSteps; steps++)
{
next += rayStep;
int xdim = volume.volDims[0];
int ydim = volume.volDims[1];
int zdim = volume.volDims[2];
int ix = cvRound(next.x);
int iy = cvRound(next.y);
int iz = cvRound(next.z);
fnext = volume.volume.at<Voxel>(ix*xdim + iy*ydim + iz*zdim).v;
if(fnext != f)
{
fnext = volume.interpolateVoxel(next);
// when ray crosses a surface
if(std::signbit(f) != std::signbit(fnext))
break;
f = fnext;
}
}
// if ray penetrates a surface from outside
// linearly interpolate t between two f values
if(f > 0.f && fnext < 0.f)
{
Point3f tp = next - rayStep;
volumeType ft = volume.interpolateVoxel(tp);
volumeType ftdt = volume.interpolateVoxel(next);
// float t = tmin + steps*tstep;
// float ts = t - tstep*ft/(ftdt - ft);
float ts = tmin + tstep*(steps - ft/(ftdt - ft));
// avoid division by zero
if(!cvIsNaN(ts) && !cvIsInf(ts))
{
Point3f pv = (orig + dir*ts);
Point3f nv = volume.getNormalVoxel(pv);
if(!isNaN(nv))
{
//convert pv and nv to camera space
normal = volRot * nv;
// interpolation optimized a little
point = vol2cam * (pv*volume.voxelSize);
}
}
}
}
ptsRow[x] = toPtype(point);
nrmRow[x] = toPtype(normal);
}
}
}
Points& points;
Normals& normals;
const TSDFVolumeCPU& volume;
const float tstep;
const Point3f boxMax;
const Point3f boxMin;
const Affine3f cam2vol;
const Affine3f vol2cam;
const Intr::Reprojector reproj;
};
void TSDFVolumeCPU::raycast(cv::Affine3f cameraPose, Intr intrinsics, Size frameSize,
cv::OutputArray _points, cv::OutputArray _normals) const
{
CV_TRACE_FUNCTION();
CV_Assert(frameSize.area() > 0);
_points.create (frameSize, POINT_TYPE);
_normals.create(frameSize, POINT_TYPE);
Points points = _points.getMat();
Normals normals = _normals.getMat();
RaycastInvoker ri(points, normals, cameraPose, intrinsics, *this);
const int nstripes = -1;
parallel_for_(Range(0, points.rows), ri, nstripes);
}
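// Illustrative sketch (the helper name is ours; it is not called by the pipeline):
// the box clipping at the top of RaycastInvoker::operator() is the classic slab
// test: intersect the per-axis entry/exit parameters, and the ray hits the box
// iff the largest entry parameter is smaller than the smallest exit one. The
// caller additionally clamps tmin to the near plane and pads both ends by tstep.
static inline bool clipRayToBox(const Point3f& orig, const Point3f& dir,
                                const Point3f& boxMin, const Point3f& boxMax,
                                float& tmin, float& tmax)
{
    Vec3f rayinv(1.f/dir.x, 1.f/dir.y, 1.f/dir.z);
    Point3f tbottom = rayinv.mul(boxMin - orig);
    Point3f ttop    = rayinv.mul(boxMax - orig);
    tmin = max(max(min(ttop.x, tbottom.x), min(ttop.y, tbottom.y)), min(ttop.z, tbottom.z));
    tmax = min(min(max(ttop.x, tbottom.x), max(ttop.y, tbottom.y)), max(ttop.z, tbottom.z));
    return tmin < tmax;
}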
struct FetchPointsNormalsInvoker : ParallelLoopBody
{
FetchPointsNormalsInvoker(const TSDFVolumeCPU& _volume,
std::vector< std::vector<ptype> >& _pVecs,
std::vector< std::vector<ptype> >& _nVecs,
bool _needNormals, bool _fetchVoxels) :
ParallelLoopBody(),
vol(_volume),
pVecs(_pVecs),
nVecs(_nVecs),
needNormals(_needNormals),
fetchVoxels(_fetchVoxels)
{
volDataStart = vol.volume.ptr<Voxel>();
}
inline void coord(std::vector<ptype>& points, std::vector<ptype>& normals,
int x, int y, int z, Point3f V, float v0, int axis) const
{
// 0 for x, 1 for y, 2 for z
bool limits = false;
Point3i shift;
float Vc = 0.f;
if(axis == 0)
{
shift = Point3i(1, 0, 0);
limits = (x + 1 < vol.volResolution.x);
Vc = V.x;
}
if(axis == 1)
{
shift = Point3i(0, 1, 0);
limits = (y + 1 < vol.volResolution.y);
Vc = V.y;
}
if(axis == 2)
{
shift = Point3i(0, 0, 1);
limits = (z + 1 < vol.volResolution.z);
Vc = V.z;
}
if(limits)
{
const Voxel& voxeld = volDataStart[(x+shift.x)*vol.volDims[0] +
(y+shift.y)*vol.volDims[1] +
(z+shift.z)*vol.volDims[2]];
volumeType vd = voxeld.v;
if(voxeld.weight != 0 && vd != 1.f)
{
if((v0 > 0 && vd < 0) || (v0 < 0 && vd > 0))
{
//linearly interpolate coordinate
float Vn = Vc + vol.voxelSize;
float dinv = 1.f/(abs(v0)+abs(vd));
float inter = (Vc*abs(vd) + Vn*abs(v0))*dinv;
Point3f p(shift.x ? inter : V.x,
shift.y ? inter : V.y,
shift.z ? inter : V.z);
{
if(fetchVoxels)
{
points.push_back(toPtype(p));
if(needNormals)
normals.push_back(toPtype(vol.getNormalVoxel(p*vol.voxelSizeInv)));
} else {
points.push_back(toPtype(vol.pose * p));
if(needNormals)
normals.push_back(toPtype(vol.pose.rotation() *
vol.getNormalVoxel(p*vol.voxelSizeInv)));
}
}
}
}
}
}
virtual void operator() (const Range& range) const override
{
std::vector<ptype> points, normals;
for(int x = range.start; x < range.end; x++)
{
const Voxel* volDataX = volDataStart + x*vol.volDims[0];
for(int y = 0; y < vol.volResolution.y; y++)
{
const Voxel* volDataY = volDataX + y*vol.volDims[1];
for(int z = 0; z < vol.volResolution.z; z++)
{
const Voxel& voxel0 = volDataY[z*vol.volDims[2]];
volumeType v0 = voxel0.v;
if(voxel0.weight != 0 && v0 != 1.f)
{
Point3f V(Point3f((float)x + 0.5f, (float)y + 0.5f, (float)z + 0.5f)*vol.voxelSize);
coord(points, normals, x, y, z, V, v0, 0);
coord(points, normals, x, y, z, V, v0, 1);
coord(points, normals, x, y, z, V, v0, 2);
} // if voxel is not empty
}
}
}
AutoLock al(mutex);
pVecs.push_back(points);
nVecs.push_back(normals);
}
const TSDFVolumeCPU& vol;
std::vector< std::vector<ptype> >& pVecs;
std::vector< std::vector<ptype> >& nVecs;
const Voxel* volDataStart;
bool needNormals;
bool fetchVoxels;
mutable Mutex mutex;
};
void TSDFVolumeCPU::fetchPointsNormals(OutputArray _points, OutputArray _normals, bool fetchVoxels) const
{
CV_TRACE_FUNCTION();
if(_points.needed())
{
std::vector< std::vector<ptype> > pVecs, nVecs;
FetchPointsNormalsInvoker fi(*this, pVecs, nVecs, _normals.needed(), fetchVoxels);
Range range(0, volResolution.x);
const int nstripes = -1;
parallel_for_(range, fi, nstripes);
std::vector<ptype> points, normals;
for(size_t i = 0; i < pVecs.size(); i++)
{
points.insert(points.end(), pVecs[i].begin(), pVecs[i].end());
normals.insert(normals.end(), nVecs[i].begin(), nVecs[i].end());
}
_points.create((int)points.size(), 1, POINT_TYPE);
if(!points.empty())
Mat((int)points.size(), 1, POINT_TYPE, &points[0]).copyTo(_points.getMat());
if(_normals.needed())
{
_normals.create((int)normals.size(), 1, POINT_TYPE);
if(!normals.empty())
Mat((int)normals.size(), 1, POINT_TYPE, &normals[0]).copyTo(_normals.getMat());
}
}
}
struct PushNormals
{
PushNormals(const TSDFVolumeCPU& _vol, Mat_<ptype>& _nrm) :
vol(_vol), normals(_nrm), invPose(vol.pose.inv())
{ }
void operator ()(const ptype &pp, const int * position) const
{
Point3f p = fromPtype(pp);
Point3f n = nan3;
if(!isNaN(p))
{
Point3f voxPt = (invPose * p);
voxPt = voxPt * vol.voxelSizeInv;
n = vol.pose.rotation() * vol.getNormalVoxel(voxPt);
}
normals(position[0], position[1]) = toPtype(n);
}
const TSDFVolumeCPU& vol;
Mat_<ptype>& normals;
Affine3f invPose;
};
void TSDFVolumeCPU::fetchNormals(InputArray _points, OutputArray _normals) const
{
CV_TRACE_FUNCTION();
if(_normals.needed())
{
Points points = _points.getMat();
CV_Assert(points.type() == POINT_TYPE);
_normals.createSameSize(_points, _points.type());
Mat_<ptype> normals = _normals.getMat();
points.forEach(PushNormals(*this, normals));
}
}
struct MarchCubesInvoker : ParallelLoopBody
{
MarchCubesInvoker(const TSDFVolumeCPU& _volume,
std::vector<Vec4f>& _meshPoints) :
volume(_volume),
meshPoints(_meshPoints),
mcNeighbourPts{
Point3f(0.f, 0.f, 0.f),
Point3f(0.f, 0.f, 1.f),
Point3f(0.f, 1.f, 1.f),
Point3f(0.f, 1.f, 0.f),
Point3f(1.f, 0.f, 0.f),
Point3f(1.f, 0.f, 1.f),
Point3f(1.f, 1.f, 1.f),
Point3f(1.f, 1.f, 0.f)},
mcNeighbourCoords(
Vec8i(
volume.volDims.dot(Vec4i(0, 0, 0)),
volume.volDims.dot(Vec4i(0, 0, 1)),
volume.volDims.dot(Vec4i(0, 1, 1)),
volume.volDims.dot(Vec4i(0, 1, 0)),
volume.volDims.dot(Vec4i(1, 0, 0)),
volume.volDims.dot(Vec4i(1, 0, 1)),
volume.volDims.dot(Vec4i(1, 1, 1)),
volume.volDims.dot(Vec4i(1, 1, 0))
))
{
volData = volume.volume.ptr<Voxel>();
}
Point3f interpolate(Point3f p1, Point3f p2, float v1, float v2) const
{
float dV = 0.5f;
if (abs(v1 - v2) > 0.0001f)
dV = v1 / (v1 - v2);
Point3f p = p1 + dV * (p2 - p1);
return p;
}
virtual void operator()(const Range &range) const override
{
std::vector<Vec4f> points;
for (int x = range.start; x < range.end; x++)
{
int coordBaseX = x * volume.volDims[0];
for (int y = 0; y < volume.volResolution.y - 1; y++)
{
int coordBaseY = coordBaseX + y * volume.volDims[1];
for (int z = 0; z < volume.volResolution.z - 1; z++)
{
int coordBase = coordBaseY + z * volume.volDims[2];
if (volData[coordBase].weight == 0)
continue;
uint8_t cubeIndex = 0;
float tsdfValues[8] = {0};
for (int i = 0; i < 8; i++)
{
if (volData[mcNeighbourCoords[i] + coordBase].weight == 0)
continue;
tsdfValues[i] = volData[mcNeighbourCoords[i] + coordBase].v;
if (tsdfValues[i] <= 0)
cubeIndex |= (1 << i);
}
if (edgeTable[cubeIndex] == 0)
continue;
Point3f vertices[12];
Point3f basePt((float)x, (float)y, (float)z);
if (edgeTable[cubeIndex] & 1)
vertices[0] = basePt + interpolate(mcNeighbourPts[0], mcNeighbourPts[1],
tsdfValues[0], tsdfValues[1]);
if (edgeTable[cubeIndex] & 2)
vertices[1] = basePt + interpolate(mcNeighbourPts[1], mcNeighbourPts[2],
tsdfValues[1], tsdfValues[2]);
if (edgeTable[cubeIndex] & 4)
vertices[2] = basePt + interpolate(mcNeighbourPts[2], mcNeighbourPts[3],
tsdfValues[2], tsdfValues[3]);
if (edgeTable[cubeIndex] & 8)
vertices[3] = basePt + interpolate(mcNeighbourPts[3], mcNeighbourPts[0],
tsdfValues[3], tsdfValues[0]);
if (edgeTable[cubeIndex] & 16)
vertices[4] = basePt + interpolate(mcNeighbourPts[4], mcNeighbourPts[5],
tsdfValues[4], tsdfValues[5]);
if (edgeTable[cubeIndex] & 32)
vertices[5] = basePt + interpolate(mcNeighbourPts[5], mcNeighbourPts[6],
tsdfValues[5], tsdfValues[6]);
if (edgeTable[cubeIndex] & 64)
vertices[6] = basePt + interpolate(mcNeighbourPts[6], mcNeighbourPts[7],
tsdfValues[6], tsdfValues[7]);
if (edgeTable[cubeIndex] & 128)
vertices[7] = basePt + interpolate(mcNeighbourPts[7], mcNeighbourPts[4],
tsdfValues[7], tsdfValues[4]);
if (edgeTable[cubeIndex] & 256)
vertices[8] = basePt + interpolate(mcNeighbourPts[0], mcNeighbourPts[4],
tsdfValues[0], tsdfValues[4]);
if (edgeTable[cubeIndex] & 512)
vertices[9] = basePt + interpolate(mcNeighbourPts[1], mcNeighbourPts[5],
tsdfValues[1], tsdfValues[5]);
if (edgeTable[cubeIndex] & 1024)
vertices[10] = basePt + interpolate(mcNeighbourPts[2], mcNeighbourPts[6],
tsdfValues[2], tsdfValues[6]);
if (edgeTable[cubeIndex] & 2048)
vertices[11] = basePt + interpolate(mcNeighbourPts[3], mcNeighbourPts[7],
tsdfValues[3], tsdfValues[7]);
for (int i = 0; triTable[cubeIndex][i] != -1; i += 3)
{
Point3f p = volume.pose * (vertices[triTable[cubeIndex][i]] * volume.voxelSize);
points.push_back(Vec4f(p.x, p.y, p.z, 1.f));
p = volume.pose * (vertices[triTable[cubeIndex][i + 1]] * volume.voxelSize);
points.push_back(Vec4f(p.x, p.y, p.z, 1.f));
p = volume.pose * (vertices[triTable[cubeIndex][i + 2]] * volume.voxelSize);
points.push_back(Vec4f(p.x, p.y, p.z, 1.f));
}
}
}
}
if(points.size() > 0)
{
AutoLock al(m);
meshPoints.insert(meshPoints.end(), points.begin(), points.end());
}
}
const TSDFVolumeCPU& volume;
std::vector<Vec4f>& meshPoints;
const Point3f mcNeighbourPts[8];
const Vec8i mcNeighbourCoords;
const Voxel* volData;
mutable Mutex m;
};
void TSDFVolumeCPU::marchCubes(OutputArray _vertices, OutputArray _edges) const
{
std::vector<Vec4f> meshPoints;
std::vector<int> meshEdges;
MarchCubesInvoker mci(*this, meshPoints);
Range range(0, volResolution.x - 1);
parallel_for_(range, mci);
for(int i = 0; i < (int)meshPoints.size(); i+= 3)
{
meshEdges.push_back(i);
meshEdges.push_back(i+1);
meshEdges.push_back(i+1);
meshEdges.push_back(i+2);
meshEdges.push_back(i+2);
meshEdges.push_back(i);
}
if (_vertices.needed())
Mat((int)meshPoints.size(), 1, CV_32FC4, &meshPoints[0]).copyTo(_vertices);
if (_edges.needed())
Mat((int)meshPoints.size(), 2, CV_32S, &meshEdges[0]).copyTo(_edges);
}
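// Illustrative usage sketch (drawLine is a hypothetical viewer call, not an
// OpenCV API): the vertices form a triangle soup, three consecutive rows per
// triangle, and every row (a, b) of the edges output is one wireframe segment:
//   Mat verts, edges;
//   volume->marchCubes(verts, edges);
//   for(int e = 0; e < edges.rows; e++)
//       drawLine(verts.at<Vec4f>(edges.at<int>(e, 0)), verts.at<Vec4f>(edges.at<int>(e, 1)));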
nodeNeighboursType const& TSDFVolumeCPU::getVoxelNeighbours(Point3i v, int& n) const
{
int baseX = v.x * volDims[0];
int baseY = baseX + v.y * volDims[1];
int base = baseY + v.z * volDims[2];
const Voxel *vox = volume.ptr<Voxel>()+base;
n = vox->n;
return vox->neighbours;
}
cv::Ptr<TSDFVolume> makeTSDFVolume(Point3i _res, float _voxelSize, cv::Affine3f _pose, float _truncDist, int _maxWeight,
float _raycastStepFactor)
{
return cv::makePtr<TSDFVolumeCPU>(_res, _voxelSize, _pose, _truncDist, _maxWeight, _raycastStepFactor);
}
} // namespace dynafu
} // namespace cv
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
#ifndef __OPENCV_DYNAFU_TSDF_H__
#define __OPENCV_DYNAFU_TSDF_H__
#include "kinfu_frame.hpp"
#include "warpfield.hpp"
namespace cv {
namespace dynafu {
class TSDFVolume
{
public:
// dimension in voxels, size in meters
TSDFVolume(Point3i _res, float _voxelSize, cv::Affine3f _pose, float _truncDist, int _maxWeight,
float _raycastStepFactor, bool zFirstMemOrder = true);
virtual void integrate(InputArray _depth, float depthFactor, cv::Affine3f cameraPose,
cv::kinfu::Intr intrinsics, Ptr<WarpField> wf) = 0;
virtual void raycast(cv::Affine3f cameraPose, cv::kinfu::Intr intrinsics, cv::Size frameSize,
cv::OutputArray points, cv::OutputArray normals) const = 0;
virtual void fetchPointsNormals(cv::OutputArray points, cv::OutputArray normals,
bool fetchVoxels=false) const = 0;
virtual void fetchNormals(cv::InputArray points, cv::OutputArray _normals) const = 0;
virtual void marchCubes(OutputArray _vertices, OutputArray _edges) const = 0;
virtual void reset() = 0;
virtual nodeNeighboursType const& getVoxelNeighbours(Point3i v, int& n) const = 0;
virtual ~TSDFVolume() { }
float voxelSize;
float voxelSizeInv;
Point3i volResolution;
float maxWeight;
cv::Affine3f pose;
float raycastStepFactor;
Point3f volSize;
float truncDist;
Vec4i volDims;
Vec8i neighbourCoords;
};
cv::Ptr<TSDFVolume> makeTSDFVolume(Point3i _res, float _voxelSize, cv::Affine3f _pose, float _truncDist, int _maxWeight,
float _raycastStepFactor);
} // namespace dynafu
} // namespace cv
#endif
#ifndef __OPENCV_DYNAFU_MARCHINGCUBES_H__
#define __OPENCV_DYNAFU_MARCHINGCUBES_H__
/*
These tables were originally generated by Cory Gene Bloyd
(http://paulbourke.net/geometry/polygonise) and are released
into the public domain.
*/
namespace cv {
namespace dynafu {
// For any edge, if one vertex is inside of the surface and the other is outside of the surface
// then the edge intersects the surface
// Each of the 8 vertices of the cube can be in one of two possible states: either inside or outside of the surface
// For any cube there are 2^8=256 possible sets of vertex states
// This table lists the edges intersected by the surface for all 256 possible vertex states
// There are 12 edges. For each entry in the table, if edge #n is intersected, then bit #n is set to 1
int edgeTable[256] =
{
0x000, 0x109, 0x203, 0x30a, 0x406, 0x50f, 0x605, 0x70c, 0x80c, 0x905, 0xa0f, 0xb06, 0xc0a, 0xd03, 0xe09, 0xf00,
0x190, 0x099, 0x393, 0x29a, 0x596, 0x49f, 0x795, 0x69c, 0x99c, 0x895, 0xb9f, 0xa96, 0xd9a, 0xc93, 0xf99, 0xe90,
0x230, 0x339, 0x033, 0x13a, 0x636, 0x73f, 0x435, 0x53c, 0xa3c, 0xb35, 0x83f, 0x936, 0xe3a, 0xf33, 0xc39, 0xd30,
0x3a0, 0x2a9, 0x1a3, 0x0aa, 0x7a6, 0x6af, 0x5a5, 0x4ac, 0xbac, 0xaa5, 0x9af, 0x8a6, 0xfaa, 0xea3, 0xda9, 0xca0,
0x460, 0x569, 0x663, 0x76a, 0x066, 0x16f, 0x265, 0x36c, 0xc6c, 0xd65, 0xe6f, 0xf66, 0x86a, 0x963, 0xa69, 0xb60,
0x5f0, 0x4f9, 0x7f3, 0x6fa, 0x1f6, 0x0ff, 0x3f5, 0x2fc, 0xdfc, 0xcf5, 0xfff, 0xef6, 0x9fa, 0x8f3, 0xbf9, 0xaf0,
0x650, 0x759, 0x453, 0x55a, 0x256, 0x35f, 0x055, 0x15c, 0xe5c, 0xf55, 0xc5f, 0xd56, 0xa5a, 0xb53, 0x859, 0x950,
0x7c0, 0x6c9, 0x5c3, 0x4ca, 0x3c6, 0x2cf, 0x1c5, 0x0cc, 0xfcc, 0xec5, 0xdcf, 0xcc6, 0xbca, 0xac3, 0x9c9, 0x8c0,
0x8c0, 0x9c9, 0xac3, 0xbca, 0xcc6, 0xdcf, 0xec5, 0xfcc, 0x0cc, 0x1c5, 0x2cf, 0x3c6, 0x4ca, 0x5c3, 0x6c9, 0x7c0,
0x950, 0x859, 0xb53, 0xa5a, 0xd56, 0xc5f, 0xf55, 0xe5c, 0x15c, 0x055, 0x35f, 0x256, 0x55a, 0x453, 0x759, 0x650,
0xaf0, 0xbf9, 0x8f3, 0x9fa, 0xef6, 0xfff, 0xcf5, 0xdfc, 0x2fc, 0x3f5, 0x0ff, 0x1f6, 0x6fa, 0x7f3, 0x4f9, 0x5f0,
0xb60, 0xa69, 0x963, 0x86a, 0xf66, 0xe6f, 0xd65, 0xc6c, 0x36c, 0x265, 0x16f, 0x066, 0x76a, 0x663, 0x569, 0x460,
0xca0, 0xda9, 0xea3, 0xfaa, 0x8a6, 0x9af, 0xaa5, 0xbac, 0x4ac, 0x5a5, 0x6af, 0x7a6, 0x0aa, 0x1a3, 0x2a9, 0x3a0,
0xd30, 0xc39, 0xf33, 0xe3a, 0x936, 0x83f, 0xb35, 0xa3c, 0x53c, 0x435, 0x73f, 0x636, 0x13a, 0x033, 0x339, 0x230,
0xe90, 0xf99, 0xc93, 0xd9a, 0xa96, 0xb9f, 0x895, 0x99c, 0x69c, 0x795, 0x49f, 0x596, 0x29a, 0x393, 0x099, 0x190,
0xf00, 0xe09, 0xd03, 0xc0a, 0xb06, 0xa0f, 0x905, 0x80c, 0x70c, 0x605, 0x50f, 0x406, 0x30a, 0x203, 0x109, 0x000};
// For each of the possible vertex states listed in edgeTable there is a specific triangulation
// of the edge intersection points. triTable lists all of them in the form of
// 0-5 edge triples, with each list terminated by the invalid value -1.
// For example: triTable[3] lists the 2 triangles formed when corner[0]
// and corner[1] are inside of the surface, but the rest of the cube is not.
// A small usage sketch follows the table below.
int triTable[256][16] =
{
{-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 1, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 8, 3, 9, 8, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 3, 1, 2, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{9, 2, 10, 0, 2, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{2, 8, 3, 2, 10, 8, 10, 9, 8, -1, -1, -1, -1, -1, -1, -1},
{3, 11, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 11, 2, 8, 11, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 9, 0, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 11, 2, 1, 9, 11, 9, 8, 11, -1, -1, -1, -1, -1, -1, -1},
{3, 10, 1, 11, 10, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 10, 1, 0, 8, 10, 8, 11, 10, -1, -1, -1, -1, -1, -1, -1},
{3, 9, 0, 3, 11, 9, 11, 10, 9, -1, -1, -1, -1, -1, -1, -1},
{9, 8, 10, 10, 8, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 3, 0, 7, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 1, 9, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 1, 9, 4, 7, 1, 7, 3, 1, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 10, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{3, 4, 7, 3, 0, 4, 1, 2, 10, -1, -1, -1, -1, -1, -1, -1},
{9, 2, 10, 9, 0, 2, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1},
{2, 10, 9, 2, 9, 7, 2, 7, 3, 7, 9, 4, -1, -1, -1, -1},
{8, 4, 7, 3, 11, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{11, 4, 7, 11, 2, 4, 2, 0, 4, -1, -1, -1, -1, -1, -1, -1},
{9, 0, 1, 8, 4, 7, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1},
{4, 7, 11, 9, 4, 11, 9, 11, 2, 9, 2, 1, -1, -1, -1, -1},
{3, 10, 1, 3, 11, 10, 7, 8, 4, -1, -1, -1, -1, -1, -1, -1},
{1, 11, 10, 1, 4, 11, 1, 0, 4, 7, 11, 4, -1, -1, -1, -1},
{4, 7, 8, 9, 0, 11, 9, 11, 10, 11, 0, 3, -1, -1, -1, -1},
{4, 7, 11, 4, 11, 9, 9, 11, 10, -1, -1, -1, -1, -1, -1, -1},
{9, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{9, 5, 4, 0, 8, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 5, 4, 1, 5, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{8, 5, 4, 8, 3, 5, 3, 1, 5, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 10, 9, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{3, 0, 8, 1, 2, 10, 4, 9, 5, -1, -1, -1, -1, -1, -1, -1},
{5, 2, 10, 5, 4, 2, 4, 0, 2, -1, -1, -1, -1, -1, -1, -1},
{2, 10, 5, 3, 2, 5, 3, 5, 4, 3, 4, 8, -1, -1, -1, -1},
{9, 5, 4, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 11, 2, 0, 8, 11, 4, 9, 5, -1, -1, -1, -1, -1, -1, -1},
{0, 5, 4, 0, 1, 5, 2, 3, 11, -1, -1, -1, -1, -1, -1, -1},
{2, 1, 5, 2, 5, 8, 2, 8, 11, 4, 8, 5, -1, -1, -1, -1},
{10, 3, 11, 10, 1, 3, 9, 5, 4, -1, -1, -1, -1, -1, -1, -1},
{4, 9, 5, 0, 8, 1, 8, 10, 1, 8, 11, 10, -1, -1, -1, -1},
{5, 4, 0, 5, 0, 11, 5, 11, 10, 11, 0, 3, -1, -1, -1, -1},
{5, 4, 8, 5, 8, 10, 10, 8, 11, -1, -1, -1, -1, -1, -1, -1},
{9, 7, 8, 5, 7, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{9, 3, 0, 9, 5, 3, 5, 7, 3, -1, -1, -1, -1, -1, -1, -1},
{0, 7, 8, 0, 1, 7, 1, 5, 7, -1, -1, -1, -1, -1, -1, -1},
{1, 5, 3, 3, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{9, 7, 8, 9, 5, 7, 10, 1, 2, -1, -1, -1, -1, -1, -1, -1},
{10, 1, 2, 9, 5, 0, 5, 3, 0, 5, 7, 3, -1, -1, -1, -1},
{8, 0, 2, 8, 2, 5, 8, 5, 7, 10, 5, 2, -1, -1, -1, -1},
{2, 10, 5, 2, 5, 3, 3, 5, 7, -1, -1, -1, -1, -1, -1, -1},
{7, 9, 5, 7, 8, 9, 3, 11, 2, -1, -1, -1, -1, -1, -1, -1},
{9, 5, 7, 9, 7, 2, 9, 2, 0, 2, 7, 11, -1, -1, -1, -1},
{2, 3, 11, 0, 1, 8, 1, 7, 8, 1, 5, 7, -1, -1, -1, -1},
{11, 2, 1, 11, 1, 7, 7, 1, 5, -1, -1, -1, -1, -1, -1, -1},
{9, 5, 8, 8, 5, 7, 10, 1, 3, 10, 3, 11, -1, -1, -1, -1},
{5, 7, 0, 5, 0, 9, 7, 11, 0, 1, 0, 10, 11, 10, 0, -1},
{11, 10, 0, 11, 0, 3, 10, 5, 0, 8, 0, 7, 5, 7, 0, -1},
{11, 10, 5, 7, 11, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{10, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 3, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{9, 0, 1, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 8, 3, 1, 9, 8, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1},
{1, 6, 5, 2, 6, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 6, 5, 1, 2, 6, 3, 0, 8, -1, -1, -1, -1, -1, -1, -1},
{9, 6, 5, 9, 0, 6, 0, 2, 6, -1, -1, -1, -1, -1, -1, -1},
{5, 9, 8, 5, 8, 2, 5, 2, 6, 3, 2, 8, -1, -1, -1, -1},
{2, 3, 11, 10, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{11, 0, 8, 11, 2, 0, 10, 6, 5, -1, -1, -1, -1, -1, -1, -1},
{0, 1, 9, 2, 3, 11, 5, 10, 6, -1, -1, -1, -1, -1, -1, -1},
{5, 10, 6, 1, 9, 2, 9, 11, 2, 9, 8, 11, -1, -1, -1, -1},
{6, 3, 11, 6, 5, 3, 5, 1, 3, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 11, 0, 11, 5, 0, 5, 1, 5, 11, 6, -1, -1, -1, -1},
{3, 11, 6, 0, 3, 6, 0, 6, 5, 0, 5, 9, -1, -1, -1, -1},
{6, 5, 9, 6, 9, 11, 11, 9, 8, -1, -1, -1, -1, -1, -1, -1},
{5, 10, 6, 4, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 3, 0, 4, 7, 3, 6, 5, 10, -1, -1, -1, -1, -1, -1, -1},
{1, 9, 0, 5, 10, 6, 8, 4, 7, -1, -1, -1, -1, -1, -1, -1},
{10, 6, 5, 1, 9, 7, 1, 7, 3, 7, 9, 4, -1, -1, -1, -1},
{6, 1, 2, 6, 5, 1, 4, 7, 8, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 5, 5, 2, 6, 3, 0, 4, 3, 4, 7, -1, -1, -1, -1},
{8, 4, 7, 9, 0, 5, 0, 6, 5, 0, 2, 6, -1, -1, -1, -1},
{7, 3, 9, 7, 9, 4, 3, 2, 9, 5, 9, 6, 2, 6, 9, -1},
{3, 11, 2, 7, 8, 4, 10, 6, 5, -1, -1, -1, -1, -1, -1, -1},
{5, 10, 6, 4, 7, 2, 4, 2, 0, 2, 7, 11, -1, -1, -1, -1},
{0, 1, 9, 4, 7, 8, 2, 3, 11, 5, 10, 6, -1, -1, -1, -1},
{9, 2, 1, 9, 11, 2, 9, 4, 11, 7, 11, 4, 5, 10, 6, -1},
{8, 4, 7, 3, 11, 5, 3, 5, 1, 5, 11, 6, -1, -1, -1, -1},
{5, 1, 11, 5, 11, 6, 1, 0, 11, 7, 11, 4, 0, 4, 11, -1},
{0, 5, 9, 0, 6, 5, 0, 3, 6, 11, 6, 3, 8, 4, 7, -1},
{6, 5, 9, 6, 9, 11, 4, 7, 9, 7, 11, 9, -1, -1, -1, -1},
{10, 4, 9, 6, 4, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 10, 6, 4, 9, 10, 0, 8, 3, -1, -1, -1, -1, -1, -1, -1},
{10, 0, 1, 10, 6, 0, 6, 4, 0, -1, -1, -1, -1, -1, -1, -1},
{8, 3, 1, 8, 1, 6, 8, 6, 4, 6, 1, 10, -1, -1, -1, -1},
{1, 4, 9, 1, 2, 4, 2, 6, 4, -1, -1, -1, -1, -1, -1, -1},
{3, 0, 8, 1, 2, 9, 2, 4, 9, 2, 6, 4, -1, -1, -1, -1},
{0, 2, 4, 4, 2, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{8, 3, 2, 8, 2, 4, 4, 2, 6, -1, -1, -1, -1, -1, -1, -1},
{10, 4, 9, 10, 6, 4, 11, 2, 3, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 2, 2, 8, 11, 4, 9, 10, 4, 10, 6, -1, -1, -1, -1},
{3, 11, 2, 0, 1, 6, 0, 6, 4, 6, 1, 10, -1, -1, -1, -1},
{6, 4, 1, 6, 1, 10, 4, 8, 1, 2, 1, 11, 8, 11, 1, -1},
{9, 6, 4, 9, 3, 6, 9, 1, 3, 11, 6, 3, -1, -1, -1, -1},
{8, 11, 1, 8, 1, 0, 11, 6, 1, 9, 1, 4, 6, 4, 1, -1},
{3, 11, 6, 3, 6, 0, 0, 6, 4, -1, -1, -1, -1, -1, -1, -1},
{6, 4, 8, 11, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{7, 10, 6, 7, 8, 10, 8, 9, 10, -1, -1, -1, -1, -1, -1, -1},
{0, 7, 3, 0, 10, 7, 0, 9, 10, 6, 7, 10, -1, -1, -1, -1},
{10, 6, 7, 1, 10, 7, 1, 7, 8, 1, 8, 0, -1, -1, -1, -1},
{10, 6, 7, 10, 7, 1, 1, 7, 3, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 6, 1, 6, 8, 1, 8, 9, 8, 6, 7, -1, -1, -1, -1},
{2, 6, 9, 2, 9, 1, 6, 7, 9, 0, 9, 3, 7, 3, 9, -1},
{7, 8, 0, 7, 0, 6, 6, 0, 2, -1, -1, -1, -1, -1, -1, -1},
{7, 3, 2, 6, 7, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{2, 3, 11, 10, 6, 8, 10, 8, 9, 8, 6, 7, -1, -1, -1, -1},
{2, 0, 7, 2, 7, 11, 0, 9, 7, 6, 7, 10, 9, 10, 7, -1},
{1, 8, 0, 1, 7, 8, 1, 10, 7, 6, 7, 10, 2, 3, 11, -1},
{11, 2, 1, 11, 1, 7, 10, 6, 1, 6, 7, 1, -1, -1, -1, -1},
{8, 9, 6, 8, 6, 7, 9, 1, 6, 11, 6, 3, 1, 3, 6, -1},
{0, 9, 1, 11, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{7, 8, 0, 7, 0, 6, 3, 11, 0, 11, 6, 0, -1, -1, -1, -1},
{7, 11, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{7, 6, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{3, 0, 8, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 1, 9, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{8, 1, 9, 8, 3, 1, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1},
{10, 1, 2, 6, 11, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 10, 3, 0, 8, 6, 11, 7, -1, -1, -1, -1, -1, -1, -1},
{2, 9, 0, 2, 10, 9, 6, 11, 7, -1, -1, -1, -1, -1, -1, -1},
{6, 11, 7, 2, 10, 3, 10, 8, 3, 10, 9, 8, -1, -1, -1, -1},
{7, 2, 3, 6, 2, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{7, 0, 8, 7, 6, 0, 6, 2, 0, -1, -1, -1, -1, -1, -1, -1},
{2, 7, 6, 2, 3, 7, 0, 1, 9, -1, -1, -1, -1, -1, -1, -1},
{1, 6, 2, 1, 8, 6, 1, 9, 8, 8, 7, 6, -1, -1, -1, -1},
{10, 7, 6, 10, 1, 7, 1, 3, 7, -1, -1, -1, -1, -1, -1, -1},
{10, 7, 6, 1, 7, 10, 1, 8, 7, 1, 0, 8, -1, -1, -1, -1},
{0, 3, 7, 0, 7, 10, 0, 10, 9, 6, 10, 7, -1, -1, -1, -1},
{7, 6, 10, 7, 10, 8, 8, 10, 9, -1, -1, -1, -1, -1, -1, -1},
{6, 8, 4, 11, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{3, 6, 11, 3, 0, 6, 0, 4, 6, -1, -1, -1, -1, -1, -1, -1},
{8, 6, 11, 8, 4, 6, 9, 0, 1, -1, -1, -1, -1, -1, -1, -1},
{9, 4, 6, 9, 6, 3, 9, 3, 1, 11, 3, 6, -1, -1, -1, -1},
{6, 8, 4, 6, 11, 8, 2, 10, 1, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 10, 3, 0, 11, 0, 6, 11, 0, 4, 6, -1, -1, -1, -1},
{4, 11, 8, 4, 6, 11, 0, 2, 9, 2, 10, 9, -1, -1, -1, -1},
{10, 9, 3, 10, 3, 2, 9, 4, 3, 11, 3, 6, 4, 6, 3, -1},
{8, 2, 3, 8, 4, 2, 4, 6, 2, -1, -1, -1, -1, -1, -1, -1},
{0, 4, 2, 4, 6, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 9, 0, 2, 3, 4, 2, 4, 6, 4, 3, 8, -1, -1, -1, -1},
{1, 9, 4, 1, 4, 2, 2, 4, 6, -1, -1, -1, -1, -1, -1, -1},
{8, 1, 3, 8, 6, 1, 8, 4, 6, 6, 10, 1, -1, -1, -1, -1},
{10, 1, 0, 10, 0, 6, 6, 0, 4, -1, -1, -1, -1, -1, -1, -1},
{4, 6, 3, 4, 3, 8, 6, 10, 3, 0, 3, 9, 10, 9, 3, -1},
{10, 9, 4, 6, 10, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 9, 5, 7, 6, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 3, 4, 9, 5, 11, 7, 6, -1, -1, -1, -1, -1, -1, -1},
{5, 0, 1, 5, 4, 0, 7, 6, 11, -1, -1, -1, -1, -1, -1, -1},
{11, 7, 6, 8, 3, 4, 3, 5, 4, 3, 1, 5, -1, -1, -1, -1},
{9, 5, 4, 10, 1, 2, 7, 6, 11, -1, -1, -1, -1, -1, -1, -1},
{6, 11, 7, 1, 2, 10, 0, 8, 3, 4, 9, 5, -1, -1, -1, -1},
{7, 6, 11, 5, 4, 10, 4, 2, 10, 4, 0, 2, -1, -1, -1, -1},
{3, 4, 8, 3, 5, 4, 3, 2, 5, 10, 5, 2, 11, 7, 6, -1},
{7, 2, 3, 7, 6, 2, 5, 4, 9, -1, -1, -1, -1, -1, -1, -1},
{9, 5, 4, 0, 8, 6, 0, 6, 2, 6, 8, 7, -1, -1, -1, -1},
{3, 6, 2, 3, 7, 6, 1, 5, 0, 5, 4, 0, -1, -1, -1, -1},
{6, 2, 8, 6, 8, 7, 2, 1, 8, 4, 8, 5, 1, 5, 8, -1},
{9, 5, 4, 10, 1, 6, 1, 7, 6, 1, 3, 7, -1, -1, -1, -1},
{1, 6, 10, 1, 7, 6, 1, 0, 7, 8, 7, 0, 9, 5, 4, -1},
{4, 0, 10, 4, 10, 5, 0, 3, 10, 6, 10, 7, 3, 7, 10, -1},
{7, 6, 10, 7, 10, 8, 5, 4, 10, 4, 8, 10, -1, -1, -1, -1},
{6, 9, 5, 6, 11, 9, 11, 8, 9, -1, -1, -1, -1, -1, -1, -1},
{3, 6, 11, 0, 6, 3, 0, 5, 6, 0, 9, 5, -1, -1, -1, -1},
{0, 11, 8, 0, 5, 11, 0, 1, 5, 5, 6, 11, -1, -1, -1, -1},
{6, 11, 3, 6, 3, 5, 5, 3, 1, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 10, 9, 5, 11, 9, 11, 8, 11, 5, 6, -1, -1, -1, -1},
{0, 11, 3, 0, 6, 11, 0, 9, 6, 5, 6, 9, 1, 2, 10, -1},
{11, 8, 5, 11, 5, 6, 8, 0, 5, 10, 5, 2, 0, 2, 5, -1},
{6, 11, 3, 6, 3, 5, 2, 10, 3, 10, 5, 3, -1, -1, -1, -1},
{5, 8, 9, 5, 2, 8, 5, 6, 2, 3, 8, 2, -1, -1, -1, -1},
{9, 5, 6, 9, 6, 0, 0, 6, 2, -1, -1, -1, -1, -1, -1, -1},
{1, 5, 8, 1, 8, 0, 5, 6, 8, 3, 8, 2, 6, 2, 8, -1},
{1, 5, 6, 2, 1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 3, 6, 1, 6, 10, 3, 8, 6, 5, 6, 9, 8, 9, 6, -1},
{10, 1, 0, 10, 0, 6, 9, 5, 0, 5, 6, 0, -1, -1, -1, -1},
{0, 3, 8, 5, 6, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{10, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{11, 5, 10, 7, 5, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{11, 5, 10, 11, 7, 5, 8, 3, 0, -1, -1, -1, -1, -1, -1, -1},
{5, 11, 7, 5, 10, 11, 1, 9, 0, -1, -1, -1, -1, -1, -1, -1},
{10, 7, 5, 10, 11, 7, 9, 8, 1, 8, 3, 1, -1, -1, -1, -1},
{11, 1, 2, 11, 7, 1, 7, 5, 1, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 3, 1, 2, 7, 1, 7, 5, 7, 2, 11, -1, -1, -1, -1},
{9, 7, 5, 9, 2, 7, 9, 0, 2, 2, 11, 7, -1, -1, -1, -1},
{7, 5, 2, 7, 2, 11, 5, 9, 2, 3, 2, 8, 9, 8, 2, -1},
{2, 5, 10, 2, 3, 5, 3, 7, 5, -1, -1, -1, -1, -1, -1, -1},
{8, 2, 0, 8, 5, 2, 8, 7, 5, 10, 2, 5, -1, -1, -1, -1},
{9, 0, 1, 5, 10, 3, 5, 3, 7, 3, 10, 2, -1, -1, -1, -1},
{9, 8, 2, 9, 2, 1, 8, 7, 2, 10, 2, 5, 7, 5, 2, -1},
{1, 3, 5, 3, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 7, 0, 7, 1, 1, 7, 5, -1, -1, -1, -1, -1, -1, -1},
{9, 0, 3, 9, 3, 5, 5, 3, 7, -1, -1, -1, -1, -1, -1, -1},
{9, 8, 7, 5, 9, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{5, 8, 4, 5, 10, 8, 10, 11, 8, -1, -1, -1, -1, -1, -1, -1},
{5, 0, 4, 5, 11, 0, 5, 10, 11, 11, 3, 0, -1, -1, -1, -1},
{0, 1, 9, 8, 4, 10, 8, 10, 11, 10, 4, 5, -1, -1, -1, -1},
{10, 11, 4, 10, 4, 5, 11, 3, 4, 9, 4, 1, 3, 1, 4, -1},
{2, 5, 1, 2, 8, 5, 2, 11, 8, 4, 5, 8, -1, -1, -1, -1},
{0, 4, 11, 0, 11, 3, 4, 5, 11, 2, 11, 1, 5, 1, 11, -1},
{0, 2, 5, 0, 5, 9, 2, 11, 5, 4, 5, 8, 11, 8, 5, -1},
{9, 4, 5, 2, 11, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{2, 5, 10, 3, 5, 2, 3, 4, 5, 3, 8, 4, -1, -1, -1, -1},
{5, 10, 2, 5, 2, 4, 4, 2, 0, -1, -1, -1, -1, -1, -1, -1},
{3, 10, 2, 3, 5, 10, 3, 8, 5, 4, 5, 8, 0, 1, 9, -1},
{5, 10, 2, 5, 2, 4, 1, 9, 2, 9, 4, 2, -1, -1, -1, -1},
{8, 4, 5, 8, 5, 3, 3, 5, 1, -1, -1, -1, -1, -1, -1, -1},
{0, 4, 5, 1, 0, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{8, 4, 5, 8, 5, 3, 9, 0, 5, 0, 3, 5, -1, -1, -1, -1},
{9, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 11, 7, 4, 9, 11, 9, 10, 11, -1, -1, -1, -1, -1, -1, -1},
{0, 8, 3, 4, 9, 7, 9, 11, 7, 9, 10, 11, -1, -1, -1, -1},
{1, 10, 11, 1, 11, 4, 1, 4, 0, 7, 4, 11, -1, -1, -1, -1},
{3, 1, 4, 3, 4, 8, 1, 10, 4, 7, 4, 11, 10, 11, 4, -1},
{4, 11, 7, 9, 11, 4, 9, 2, 11, 9, 1, 2, -1, -1, -1, -1},
{9, 7, 4, 9, 11, 7, 9, 1, 11, 2, 11, 1, 0, 8, 3, -1},
{11, 7, 4, 11, 4, 2, 2, 4, 0, -1, -1, -1, -1, -1, -1, -1},
{11, 7, 4, 11, 4, 2, 8, 3, 4, 3, 2, 4, -1, -1, -1, -1},
{2, 9, 10, 2, 7, 9, 2, 3, 7, 7, 4, 9, -1, -1, -1, -1},
{9, 10, 7, 9, 7, 4, 10, 2, 7, 8, 7, 0, 2, 0, 7, -1},
{3, 7, 10, 3, 10, 2, 7, 4, 10, 1, 10, 0, 4, 0, 10, -1},
{1, 10, 2, 8, 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 9, 1, 4, 1, 7, 7, 1, 3, -1, -1, -1, -1, -1, -1, -1},
{4, 9, 1, 4, 1, 7, 0, 8, 1, 8, 7, 1, -1, -1, -1, -1},
{4, 0, 3, 7, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{4, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{9, 10, 8, 10, 11, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{3, 0, 9, 3, 9, 11, 11, 9, 10, -1, -1, -1, -1, -1, -1, -1},
{0, 1, 10, 0, 10, 8, 8, 10, 11, -1, -1, -1, -1, -1, -1, -1},
{3, 1, 10, 11, 3, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 2, 11, 1, 11, 9, 9, 11, 8, -1, -1, -1, -1, -1, -1, -1},
{3, 0, 9, 3, 9, 11, 1, 2, 9, 2, 11, 9, -1, -1, -1, -1},
{0, 2, 11, 8, 0, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{3, 2, 11, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{2, 3, 8, 2, 8, 10, 10, 8, 9, -1, -1, -1, -1, -1, -1, -1},
{9, 10, 2, 0, 9, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{2, 3, 8, 2, 8, 10, 0, 1, 8, 1, 10, 8, -1, -1, -1, -1},
{1, 10, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{1, 3, 8, 9, 1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 9, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{0, 3, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
{-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1}};
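// Illustrative sketch of how the two tables are meant to be consumed (the real
// traversal lives in MarchCubesInvoker in dynafu_tsdf.cpp; the helper name is
// ours): build the 8-bit cube index from the signs of the corner TSDF values,
// then walk triTable in steps of three edge indices until the -1 terminator.
static inline int countTrianglesInCube(const float tsdf[8])
{
    int cubeIndex = 0;
    for(int i = 0; i < 8; i++)
        if(tsdf[i] <= 0.f)           // corner is inside (or on) the surface
            cubeIndex |= (1 << i);
    if(edgeTable[cubeIndex] == 0)    // cube is entirely inside or outside
        return 0;
    int tris = 0;
    for(int i = 0; triTable[cubeIndex][i] != -1; i += 3)
        tris++;                      // each triple of edge ids is one triangle
    return tris;
}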
} // namespace dynafu
} // namespace cv
#endif
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
// This code is also subject to the license terms in the LICENSE_KinectFusion.md file found in this module's directory
#include <algorithm>
#include "precomp.hpp"
#include "nonrigid_icp.hpp"
#define MAD_SCALE 1.4826f
#define TUKEY_B 4.6851f
#define HUBER_K 1.345f
namespace cv
{
namespace dynafu
{
using namespace kinfu;
NonRigidICP::NonRigidICP(const Intr _intrinsics, const cv::Ptr<TSDFVolume>& _volume, int _iterations) :
iterations(_iterations), volume(_volume), intrinsics(_intrinsics)
{}
class ICPImpl : public NonRigidICP
{
public:
ICPImpl(const cv::kinfu::Intr _intrinsics, const cv::Ptr<TSDFVolume>& _volume, int _iterations);
virtual bool estimateWarpNodes(WarpField& currentWarp, const Affine3f &pose,
InputArray vertImage, InputArray oldPoints,
InputArray oldNormals, InputArray newPoints,
InputArray newNormals) const override;
virtual ~ICPImpl() {}
private:
float median(std::vector<float> v) const;
float tukeyWeight(float x, float sigma) const;
float huberWeight(Vec3f v, float sigma) const;
};
ICPImpl::ICPImpl(const Intr _intrinsics, const cv::Ptr<TSDFVolume>& _volume, int _iterations) :
NonRigidICP(_intrinsics, _volume, _iterations)
{}
static inline bool fastCheck(const Point3f& p)
{
return !cvIsNaN(p.x);
}
float ICPImpl::median(std::vector<float> v) const
{
size_t n = v.size()/2;
if(n == 0) return 0;
std::nth_element(v.begin(), v.begin()+n, v.end());
float vn = v[n];
if(v.size()%2 == 0) // even number of samples: average the two middle values
{
std::nth_element(v.begin(), v.begin()+n-1, v.end());
return (vn+v[n-1])/2.f;
}
else
{
return vn;
}
}
float ICPImpl::tukeyWeight(float r, float sigma) const
{
float x = r/sigma;
if(std::abs(x) <= TUKEY_B)
{
float y = 1 - (x*x)/(TUKEY_B * TUKEY_B);
return y*y;
} else return 0;
}
float ICPImpl::huberWeight(Vec3f v, float sigma) const
{
if(sigma == 0) return 0.f;
float x = (float)std::abs(norm(v)/sigma);
return (x > HUBER_K)? HUBER_K/x : 1.f;
}
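// Illustrative sketch (the helper name is ours; estimateWarpNodes below repeats
// this computation inline via median()): the robust sigma fed to the Tukey and
// Huber weights is the median absolute deviation (MAD) of the residuals, scaled
// by MAD_SCALE ~ 1.4826 so that it is consistent with the standard deviation of
// a Gaussian. For simplicity this sketch returns the upper median for
// even-sized inputs.
static inline float madSigma(std::vector<float> residuals, float med)
{
    if(residuals.empty()) return 0.f;
    for(size_t i = 0; i < residuals.size(); i++)
        residuals[i] = std::abs(residuals[i] - med);
    size_t n = residuals.size() / 2;
    std::nth_element(residuals.begin(), residuals.begin() + n, residuals.end());
    return MAD_SCALE * residuals[n];
}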
bool ICPImpl::estimateWarpNodes(WarpField& currentWarp, const Affine3f &pose,
InputArray _vertImage, InputArray _oldPoints,
InputArray _oldNormals, InputArray _newPoints,
InputArray _newNormals) const
{
CV_Assert(_vertImage.isMat());
CV_Assert(_oldPoints.isMat());
CV_Assert(_newPoints.isMat());
CV_Assert(_newNormals.isMat());
Mat vertImage = _vertImage.getMat();
Mat oldPoints = _oldPoints.getMat();
Mat newPoints = _newPoints.getMat();
Mat newNormals = _newNormals.getMat();
Mat oldNormals = _oldNormals.getMat();
CV_Assert(!vertImage.empty());
CV_Assert(!oldPoints.empty());
CV_Assert(!newPoints.empty());
CV_Assert(!newNormals.empty());
const NodeVectorType& warpNodes = currentWarp.getNodes();
Affine3f T_lw = pose.inv() * volume->pose;
// Accumulate the regularisation term for each node in the hierarchy
const std::vector<NodeVectorType>& regNodes = currentWarp.getGraphNodes();
const heirarchyType& regGraph = currentWarp.getRegGraph();
int totalNodes = (int)warpNodes.size();
for(const auto& nodes: regNodes) totalNodes += (int)nodes.size();
// level-wise regularisation components of A and b (from Ax = b) for each node in the hierarchy
Mat_<float> b_reg(6*totalNodes, 1, 0.f);
Mat_<float> A_reg(6*totalNodes, 6*totalNodes, 0.f);
// base indices of each node block in the A, b matrices; they determine the order
// in which the parameters are laid out
std::vector<int> baseIndices(currentWarp.n_levels, 0);
for(int l = currentWarp.n_levels-2; l >= 0; l--)
{
baseIndices[l] = baseIndices[l+1]+6*((int)regNodes[l].size());
}
for(const int& i: baseIndices) std::cout << i << ", ";
std::cout << std::endl;
// populate residuals for each edge in the graph to calculate sigma
std::vector<float> reg_residuals;
float RegEnergy = 0;
int numEdges = 0;
for(int l = 0; l < (currentWarp.n_levels-1); l++)
{
const std::vector<nodeNeighboursType>& level = regGraph[l];
const NodeVectorType& currentLevelNodes = (l == 0)? warpNodes : regNodes[l-1];
const NodeVectorType& nextLevelNodes = regNodes[l];
std::cout << currentLevelNodes.size() << " " << nextLevelNodes.size() << std::endl;
for(size_t node = 0; node < level.size(); node++)
{
const nodeNeighboursType& children = level[node];
Vec3f nodePos = currentLevelNodes[node]->pos;
Affine3f nodeTransform = currentLevelNodes[node]->transform;
for(int c = 0; c < currentWarp.k; c++)
{
const int child = children[c];
Vec3f childPos = nextLevelNodes[child]->pos;
Vec3f childTranslation = nextLevelNodes[child]->transform.translation();
Vec3f re = nodeTransform * (childPos - nodePos) + nodePos
- (childTranslation + childPos);
numEdges++;
reg_residuals.push_back((float)norm(re));
RegEnergy += (float)norm(re);
}
}
}
Mat_<float> J_reg(6*numEdges, 6*totalNodes, 0.f);
std::cout << "Total reg energy: " << RegEnergy << ", Average: " << RegEnergy/numEdges << std::endl;
float reg_med = median(reg_residuals);
std::for_each(reg_residuals.begin(), reg_residuals.end(),
[reg_med](float& x)
{
x = std::abs(x-reg_med);
});
float reg_sigma = MAD_SCALE * median(reg_residuals);
std::cout << "[Reg] Sigma: " << reg_sigma << " from " << reg_residuals.size() << " residuals " << std::endl;
for(int l = 0; l < (currentWarp.n_levels-1); l++)
{
const std::vector<nodeNeighboursType>& level = regGraph[l];
const NodeVectorType& currentLevelNodes = (l == 0)? warpNodes : regNodes[l-1];
const NodeVectorType& nextLevelNodes = regNodes[l];
for(size_t node = 0; node < level.size(); node++)
{
const nodeNeighboursType& children = level[node];
Vec3f nodePos = currentLevelNodes[node]->pos;
Affine3f nodeTransform = currentLevelNodes[node]->transform;
int parentIndex = baseIndices[l]+6*(int)node;
for(int edge = 0; edge < currentWarp.k; edge++)
{
const int child = children[edge];
const Ptr<WarpNode> childNode = nextLevelNodes[child];
Vec3f childTranslation = childNode->transform.translation();
Vec3f childPos = childNode->pos;
Vec3f transformedChild = nodeTransform * (childPos - nodePos);
Vec3f r_edge = transformedChild + nodePos - (childTranslation + childPos);
if(norm(r_edge) > 0.01) continue;
float robustWeight = huberWeight(r_edge, reg_sigma);
// take sqrt since radius is stored as squared distance
float edgeWeight = sqrt(min(childNode->radius, currentLevelNodes[node]->radius));
Vec3f v1 = transformedChild.cross(r_edge);
float w = 1 * robustWeight * edgeWeight;
b_reg(parentIndex+0) += -w * v1[0];
b_reg(parentIndex+1) += -w * v1[1];
b_reg(parentIndex+2) += -w * v1[2];
b_reg(parentIndex+3) += -w * r_edge[0];
b_reg(parentIndex+4) += -w * r_edge[1];
b_reg(parentIndex+5) += -w * r_edge[2];
int childIndex = baseIndices[l+1]+6*child;
Vec3f v2 = childTranslation.cross(r_edge);
b_reg(childIndex+0) += w * v2[0];
b_reg(childIndex+1) += w * v2[1];
b_reg(childIndex+2) += w * v2[2];
b_reg(childIndex+3) += w * r_edge[0];
b_reg(childIndex+4) += w * r_edge[1];
b_reg(childIndex+5) += w * r_edge[2];
Matx33f Tj_Vj_Vi_cross(0, -transformedChild[2], transformedChild[1],
transformedChild[2], 0, -transformedChild[0],
-transformedChild[1], transformedChild[0], 0);
Matx33f tj_cross(0, -childTranslation[2], childTranslation[1],
childTranslation[2], 0, -childTranslation[0],
-childTranslation[1], childTranslation[0], 0);
// place top left elements
Matx33f top_left = Tj_Vj_Vi_cross * tj_cross;
for(int i = 0; i < 3; i++)
for (int j = 0; j < 3; j++)
{
A_reg(parentIndex+i, childIndex+j) += w * top_left(i, j);
A_reg(childIndex+i, parentIndex+j) += w * top_left(i, j);
}
// place top right elements
for(int i = 0; i < 3; i++)
for(int j = 0; j < 3; j++)
{
A_reg(parentIndex+i, childIndex+j+3) += - w * Tj_Vj_Vi_cross(i, j);
A_reg(childIndex+i, parentIndex+j+3) += - w * Tj_Vj_Vi_cross(i, j);
}
// place bottom left elements
for(int i = 0; i < 3; i++)
for(int j = 0; j < 3; j++)
{
A_reg(parentIndex+i+3, childIndex+j) += w * tj_cross(i, j);
A_reg(childIndex+i+3, parentIndex+j) += w * tj_cross(i, j);
}
// place bottom right elements (which is -I_3)
for(int i = 0; i < 3; i++)
{
A_reg(parentIndex+i+3, childIndex+i+3) += -w;
A_reg(childIndex+i+3, parentIndex+i+3) += -w;
}
}
}
}
std::vector<float> residuals;
Mat Vg(oldPoints.size(), CV_32FC3, nan3);
Mat Vc(oldPoints.size(), CV_32FC3, nan3);
Mat Nc(oldPoints.size(), CV_32FC3, nan3);
cv::kinfu::Intr::Projector proj = intrinsics.makeProjector();
for (int y = 0; y < oldPoints.size().height; y++)
{
for (int x = 0; x < oldPoints.size().width; x++)
{
// Obtain correspondence by projecting Tu_Vg
Vec3f curV = oldPoints.at<Vec3f>(y, x);
if (curV == Vec3f::all(0) || cvIsNaN(curV[0]) || cvIsNaN(curV[1]) || cvIsNaN(curV[2]))
continue;
Point2f newCoords = proj(oldPoints.at<Point3f>(y, x));
if(!(newCoords.x >= 0 && newCoords.x < newPoints.cols - 1 &&
newCoords.y >= 0 && newCoords.y < newPoints.rows - 1))
continue;
// TODO: interpolate Vg instead of simply converting projected coords to int
Vg.at<Vec3f>(y, x) = vertImage.at<Vec3f>((int)newCoords.y, (int)newCoords.x);
// bilinearly interpolate newPoints under newCoords point
int xi = cvFloor(newCoords.x), yi = cvFloor(newCoords.y);
float tx = newCoords.x - xi, ty = newCoords.y - yi;
const ptype* prow0 = newPoints.ptr<ptype>(yi+0);
const ptype* prow1 = newPoints.ptr<ptype>(yi+1);
Point3f p00 = fromPtype(prow0[xi+0]);
Point3f p01 = fromPtype(prow0[xi+1]);
Point3f p10 = fromPtype(prow1[xi+0]);
Point3f p11 = fromPtype(prow1[xi+1]);
//do not fix missing data
if(!(fastCheck(p00) && fastCheck(p01) &&
fastCheck(p10) && fastCheck(p11)))
continue;
Point3f p0 = p00 + tx*(p01 - p00);
Point3f p1 = p10 + tx*(p11 - p10);
Point3f newP = (p0 + ty*(p1 - p0));
const ptype* nrow0 = newNormals.ptr<ptype>(yi+0);
const ptype* nrow1 = newNormals.ptr<ptype>(yi+1);
Point3f n00 = fromPtype(nrow0[xi+0]);
Point3f n01 = fromPtype(nrow0[xi+1]);
Point3f n10 = fromPtype(nrow1[xi+0]);
Point3f n11 = fromPtype(nrow1[xi+1]);
if(!(fastCheck(n00) && fastCheck(n01) &&
fastCheck(n10) && fastCheck(n11)))
continue;
Point3f n0 = n00 + tx*(n01 - n00);
Point3f n1 = n10 + tx*(n11 - n10);
Point3f newN = n0 + ty*(n1 - n0);
Vc.at<Point3f>(y, x) = newP;
Nc.at<Point3f>(y, x) = newN;
Vec3f diff = oldPoints.at<Vec3f>(y, x) - Vec3f(newP);
if(diff.dot(diff) > 0.0004f) continue;
if(abs(newN.dot(oldNormals.at<Point3f>(y, x))) < cos((float)CV_PI / 2)) continue;
float rd = newN.dot(diff);
residuals.push_back(rd);
}
}
float med = median(residuals);
std::for_each(residuals.begin(), residuals.end(), [med](float& x){x = std::abs(x-med);});
std::cout << "median: " << med << " from " << residuals.size() << " residuals " << std::endl;
float sigma = MAD_SCALE * median(residuals);
float total_error = 0;
int pix_count = 0;
for(int y = 0; y < oldPoints.size().height; y++)
{
for(int x = 0; x < oldPoints.size().width; x++)
{
Vec3f curV = oldPoints.at<Vec3f>(y, x);
if (curV == Vec3f::all(0) || cvIsNaN(curV[0]))
continue;
Vec3f V = Vg.at<Vec3f>(y, x);
if (V == Vec3f::all(0) || cvIsNaN(V[0]))
continue;
V[0] *= volume->volResolution.x;
V[1] *= volume->volResolution.y;
V[2] *= volume->volResolution.z;
if(!fastCheck(Vc.at<Point3f>(y, x)))
continue;
if(!fastCheck(Nc.at<Point3f>(y, x)))
continue;
Point3i p((int)V[0], (int)V[1], (int)V[2]);
Vec3f diff = oldPoints.at<Vec3f>(y, x) - Vc.at<Vec3f>(y, x);
float rd = Nc.at<Vec3f>(y, x).dot(diff);
total_error += tukeyWeight(rd, sigma) * rd * rd;
pix_count++;
int n;
nodeNeighboursType neighbours = volume->getVoxelNeighbours(p, n);
float totalNeighbourWeight = 0.f;
float neighWeights[DYNAFU_MAX_NEIGHBOURS];
for (int i = 0; i < n; i++)
{
int neigh = neighbours[i];
neighWeights[i] = warpNodes[neigh]->weight(Point3f(V)*volume->voxelSize);
totalNeighbourWeight += neighWeights[i];
}
if(totalNeighbourWeight < 1e-5) continue;
for (int i = 0; i < n; i++)
{
if(neighWeights[i] < 0.01) continue;
int neigh = neighbours[i];
Vec3f Tj_Vg_Vj = (warpNodes[neigh]->transform *
(Point3f(V)*volume->voxelSize - warpNodes[neigh]->pos));
Matx33f Tj_Vg_Vj_x(0, -Tj_Vg_Vj[2], Tj_Vg_Vj[1],
Tj_Vg_Vj[2], 0, -Tj_Vg_Vj[0],
-Tj_Vg_Vj[1], Tj_Vg_Vj[0], 0);
Vec3f v1 = (Tj_Vg_Vj_x * T_lw.rotation().t()) * Nc.at<Vec3f>(y, x);
Vec3f v2 = T_lw.rotation().t() * Nc.at<Vec3f>(y, x);
Matx61f J_dataT(v1[0], v1[1], v1[2], v2[0], v2[1], v2[2]);
Matx16f J_data = J_dataT.t();
Matx66f H_data = J_dataT * J_data;
float w = (neighWeights[i] / totalNeighbourWeight);
float robustWeight = tukeyWeight(rd, sigma);
int blockIndex = baseIndices[0]+6*neigh;
for(int row = 0; row < 6; row++)
for(int col = 0; col < 6; col++)
A_reg(blockIndex+row, blockIndex+col) += robustWeight * w * w * H_data(row, col);
for(int row = 0; row < 6; row++)
b_reg(blockIndex+row) += -robustWeight * rd * w * J_dataT(row);
}
}
}
double det = determinant(A_reg);
std::cout << "A_reg det:" << det << std::endl;
std::cout << "Solving " << 6*totalNodes << std::endl;
Mat_<float> nodeTwists(6*totalNodes, 1, 0.f);
bool result = solve(A_reg, b_reg, nodeTwists, DECOMP_SVD);
std::cout << "Done " << result << std::endl;
for(int i = 0; i < (int)warpNodes.size(); i++)
{
int idx = baseIndices[0]+6*i;
Vec3f r(nodeTwists(idx), nodeTwists(idx+1), nodeTwists(idx+2));
Vec3f t(nodeTwists(idx+3), nodeTwists(idx+4), nodeTwists(idx+5));
Affine3f tinc(r, t);
warpNodes[i]->transform = warpNodes[i]->transform * tinc;
}
std::cout << "Nan count: " << "/" << warpNodes.size() << "\n" << std::endl;
return true;
}
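// Note (illustrative): each 6-vector block solved above is a twist (r, t); the
// Affine3f(r, t) constructor interprets r as a Rodrigues rotation vector, so
// for the small per-iteration angles exp([r]_x) ~ I + [r]_x, which matches the
// linearisation used when assembling the Jacobian blocks.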
cv::Ptr<NonRigidICP> makeNonRigidICP(const cv::kinfu::Intr _intrinsics, const cv::Ptr<TSDFVolume>& _volume,
int _iterations)
{
return makePtr<ICPImpl>(_intrinsics, _volume, _iterations);
}
} // namespace dynafu
} // namespace cv
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
// This code is also subject to the license terms in the LICENSE_KinectFusion.md file found in this module's directory
#ifndef __OPENCV_DYNAFU_NONRIGID_ICP_H__
#define __OPENCV_DYNAFU_NONRIGID_ICP_H__
#include "precomp.hpp"
#include "kinfu_frame.hpp"
#include "warpfield.hpp"
#include "dynafu_tsdf.hpp"
namespace cv {
namespace dynafu {
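// Interface of the non-rigid ICP stage: estimates an update of the warp-field node
// transforms from the current camera pose and the old/new point-normal images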
class NonRigidICP
{
public:
NonRigidICP(const cv::kinfu::Intr _intrinsics, const cv::Ptr<TSDFVolume>& _volume, int _iterations);
virtual bool estimateWarpNodes(WarpField& currentWarp, const Affine3f& pose,
InputArray vertImage, InputArray oldPoints,
InputArray oldNormals,
InputArray newPoints, InputArray newNormals) const = 0;
virtual ~NonRigidICP() { }
protected:
int iterations;
const cv::Ptr<TSDFVolume>& volume;
cv::kinfu::Intr intrinsics;
};
cv::Ptr<NonRigidICP> makeNonRigidICP(const cv::kinfu::Intr _intrinsics, const cv::Ptr<TSDFVolume>& _volume,
int _iterations);
} // namespace dynafu
} // namespace cv
#endif
#include "precomp.hpp"
#include "warpfield.hpp"
namespace cv {
namespace dynafu {
WarpField::WarpField(int _maxNeighbours, int K, int levels, float baseResolution, float resolutionGrowth):
k(K), n_levels(levels),
nodes(), maxNeighbours(_maxNeighbours), // good amount for dense kinfu pointclouds
baseRes(baseResolution),
resGrowthRate(resolutionGrowth),
regGraphNodes(n_levels-1),
heirarchy(n_levels-1),
nodeIndex(nullptr)
{
CV_Assert(k <= DYNAFU_MAX_NEIGHBOURS);
}
NodeVectorType const& WarpField::getNodes() const
{
return nodes;
}
std::vector<NodeVectorType> const& WarpField::getGraphNodes() const
{
return regGraphNodes;
}
heirarchyType const& WarpField::getRegGraph() const
{
return heirarchy;
}
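// Strict lexicographic ordering of 3D points: compare x, then y, then z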
bool PtCmp(cv::Point3f a, cv::Point3f b)
{
return (a.x < b.x) ||
((a.x >= b.x) && (a.y < b.y)) ||
((a.x >= b.x) && (a.y >= b.y) && (a.z < b.z));
}
Ptr<flann::GenericIndex<flann::L2_Simple<float> > > WarpField::getNodeIndex() const
{
return nodeIndex;
}
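// Pack node positions into an Nx3 CV_32F matrix, the layout expected by the FLANN index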
Mat getNodesPos(NodeVectorType nv)
{
Mat nodePos((int)nv.size(), 3, CV_32F);
for(int i = 0; i < (int)nv.size(); i++)
{
nodePos.at<float>(i, 0) = nv[i]->pos.x;
nodePos.at<float>(i, 1) = nv[i]->pos.y;
nodePos.at<float>(i, 2) = nv[i]->pos.z;
}
return nodePos;
}
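// Incorporate a new frame's points into the warp field: points already supported by
// existing nodes are dropped, the remainder is subsampled into new nodes at the base
// resolution, the new transforms are initialised from neighbouring nodes, and the
// node index and regularisation graph are rebuilt.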
void WarpField::updateNodesFromPoints(InputArray inputPoints)
{
Mat points_matrix(inputPoints.size().height, 3, CV_32F);
if(inputPoints.channels() == 1)
{
points_matrix = inputPoints.getMat().colRange(0, 3);
}
else
{
points_matrix = inputPoints.getMat().reshape(1).colRange(0, 3).clone();
}
cvflann::LinearIndexParams params;
flann::GenericIndex<flann::L2_Simple <float> > searchIndex(points_matrix, params);
AutoBuffer<bool> validIndex;
removeSupported(searchIndex, validIndex);
NodeVectorType newNodes;
if((int)nodes.size() > k)
{
newNodes = subsampleIndex(points_matrix, searchIndex, validIndex, baseRes, nodeIndex);
}
else
{
newNodes = subsampleIndex(points_matrix, searchIndex, validIndex, baseRes);
}
initTransforms(newNodes);
nodes.insert(nodes.end(), newNodes.begin(), newNodes.end());
nodesPos = getNodesPos(nodes);
//re-build index
nodeIndex = new flann::GenericIndex<flann::L2_Simple<float> >(nodesPos,
cvflann::LinearIndexParams());
constructRegGraph();
}
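// Mark as invalid every input point lying within the support radius of an existing node,
// so that only unsupported surface regions can spawn new nodes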
void WarpField::removeSupported(flann::GenericIndex<flann::L2_Simple<float> >& ind,
AutoBuffer<bool>& validInd)
{
validInd.allocate(ind.size());
std::fill_n(validInd.data(), ind.size(), true);
for(WarpNode* n: nodes)
{
std::vector<float> query = {n->pos.x, n->pos.y, n->pos.z};
std::vector<int> indices_vec(maxNeighbours);
std::vector<float> dists_vec(maxNeighbours);
ind.radiusSearch(query, indices_vec, dists_vec, n->radius, cvflann::SearchParams());
for(auto i: indices_vec)
{
validInd[i] = false;
}
}
}
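// Greedily subsample the still-valid points at resolution res: each new node is placed at
// the centroid of the valid points inside the search radius and those points are consumed.
// The node radius comes from the (k+1)-NN distance in knnIndex when one is supplied,
// otherwise it defaults to the sampling resolution.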
NodeVectorType WarpField::subsampleIndex(Mat& pmat,
flann::GenericIndex<flann::L2_Simple<float> >& ind,
AutoBuffer<bool>& validIndex, float res,
Ptr<flann::GenericIndex<flann::L2_Simple<float> > > knnIndex)
{
CV_TRACE_FUNCTION();
NodeVectorType temp_nodes;
for(int i = 0; i < (int)validIndex.size(); i++)
{
if(!validIndex[i])
{
continue;
}
std::vector<int> indices_vec(maxNeighbours);
std::vector<float> dist_vec(maxNeighbours);
ind.radiusSearch(pmat.row(i), indices_vec, dist_vec, res, cvflann::SearchParams());
Ptr<WarpNode> wn = new WarpNode;
Point3f centre(0, 0, 0);
float len = 0;
for(int index: indices_vec)
{
if(validIndex[index])
{
centre += Point3f(pmat.at<float>(index, 0),
pmat.at<float>(index, 1),
pmat.at<float>(index, 2));
len++;
}
}
centre /= len;
wn->pos = centre;
for(auto index: indices_vec)
{
validIndex[index] = false;
}
std::vector<int> knn_indices(k+1, 0);
std::vector<float> knn_dists(k+1, 0);
std::vector<float> query = {wn->pos.x, wn->pos.y, wn->pos.z};
if(knnIndex != nullptr)
{
knnIndex->knnSearch(query, knn_indices, knn_dists, (k+1), cvflann::SearchParams());
wn->radius = *std::max_element(knn_dists.begin(), knn_dists.end());
}
else
{
wn->radius = res;
}
wn->transform = Affine3f::Identity();
temp_nodes.push_back(wn);
}
return temp_nodes;
}
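// Initialise each new node's transform from its k nearest existing nodes:
// rotation via dual quaternion blending (DQB), translation as the
// normalised weighted mean of the neighbours' translations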
void WarpField::initTransforms(NodeVectorType nv)
{
if(nodesPos.size().height == 0)
{
return;
}
for(auto nodePtr: nv)
{
std::vector<int> knnIndices(k);
std::vector<float> knnDists(k);
std::vector<float> query = {nodePtr->pos.x, nodePtr->pos.y, nodePtr->pos.z};
nodeIndex->knnSearch(query, knnIndices, knnDists, k, cvflann::SearchParams());
std::vector<float> weights(knnIndices.size());
std::vector<Affine3f> transforms(knnIndices.size());
size_t i = 0;
for(int idx: knnIndices)
{
weights[i] = nodes[idx]->weight(nodePtr->pos);
transforms[i++] = nodes[idx]->transform;
}
Affine3f pose = DQB(weights, transforms);
// linearly interpolate translations
Vec3f translation(0,0,0);
float totalWeight = 0;
for(i = 0; i < transforms.size(); i++)
{
translation += weights[i]*transforms[i].translation();
totalWeight += weights[i];
}
if(totalWeight < 1e-5) translation = Vec3f(0, 0, 0);
else translation /= totalWeight;
nodePtr->transform = Affine3f(pose.rotation(), translation);
}
}
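// Build the regularisation hierarchy: every level subsamples the previous one at a
// resolution enlarged by resGrowthRate and stores, for each finer node,
// the indices of its k nearest coarser-level neighbours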
void WarpField::constructRegGraph()
{
CV_TRACE_FUNCTION();
regGraphNodes.clear();
if(n_levels == 1) // First level (Nwarp) is already stored in nodes
{
return;
}
float effResolution = baseRes*resGrowthRate;
NodeVectorType curNodes = nodes;
Mat curNodeMatrix = getNodesPos(curNodes);
Ptr<flann::GenericIndex<flann::L2_Simple<float> > > curNodeIndex(
new flann::GenericIndex<flann::L2_Simple<float> >(curNodeMatrix,
cvflann::LinearIndexParams()));
for(int l = 0; l < (n_levels-1); l++)
{
AutoBuffer<bool> nodeValidity;
nodeValidity.allocate(curNodeIndex->size());
std::fill_n(nodeValidity.data(), curNodeIndex->size(), true);
NodeVectorType coarseNodes = subsampleIndex(curNodeMatrix, *curNodeIndex, nodeValidity,
effResolution);
initTransforms(coarseNodes);
Mat coarseNodeMatrix = getNodesPos(coarseNodes);
Ptr<flann::GenericIndex<flann::L2_Simple<float> > > coarseNodeIndex(
new flann::GenericIndex<flann::L2_Simple<float> >(coarseNodeMatrix,
cvflann::LinearIndexParams()));
heirarchy[l] = std::vector<nodeNeighboursType>(curNodes.size());
for(int i = 0; i < (int)curNodes.size(); i++)
{
std::vector<int> children_indices(k);
std::vector<float> children_dists(k);
std::vector<float> query = {curNodeMatrix.at<float>(i, 0),
curNodeMatrix.at<float>(i, 1),
curNodeMatrix.at<float>(i, 2)};
coarseNodeIndex->knnSearch(query, children_indices, children_dists, k,
cvflann::SearchParams());
heirarchy[l][i].fill(-1);
std::copy(children_indices.begin(), children_indices.end(), heirarchy[l][i].begin());
}
regGraphNodes.push_back(coarseNodes);
curNodes = coarseNodes;
curNodeMatrix = coarseNodeMatrix;
curNodeIndex = coarseNodeIndex;
effResolution *= resGrowthRate;
}
}
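// Warp a point (or a normal, rotation only) by blending the transforms of its neighbouring
// nodes with normalised node weights; if no node contributes, the input is returned unchanged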
Point3f WarpField::applyWarp(Point3f p, const nodeNeighboursType neighbours, int n, bool normal) const
{
CV_TRACE_FUNCTION();
if(n == 0)
{
return p;
}
float totalWeight = 0;
Point3f WarpedPt(0,0,0);
for(int i = 0; i < n; i++)
{
Ptr<WarpNode> neigh = nodes[neighbours[i]];
float w = neigh->weight(p);
if(w < 0.01)
{
continue;
}
Matx33f R = neigh->transform.rotation();
Point3f newPt(0, 0, 0);
if(normal)
{
newPt = R * p;
}
else
{
newPt = R * (p - neigh->pos) + neigh->pos;
Vec3f T = neigh->transform.translation();
newPt.x += T[0];
newPt.y += T[1];
newPt.z += T[2];
}
WarpedPt += newPt * w;
totalWeight += w;
}
// Normalise only when at least one node contributed, avoiding a division by zero
if(totalWeight == 0)
{
return p;
}
else
{
return WarpedPt / totalWeight;
}
}
void WarpField::setAllRT(Affine3f warpRT)
{
for(auto n: nodes)
{
n->transform = warpRT;
}
}
} // namespace dynafu
} // namespace cv
#ifndef __OPENCV_RGBD_WARPFIELD_HPP__
#define __OPENCV_RGBD_WARPFIELD_HPP__
#include "opencv2/core.hpp"
#include "opencv2/flann.hpp"
#include "dqb.hpp"
#define DYNAFU_MAX_NEIGHBOURS 10
typedef std::array<int, DYNAFU_MAX_NEIGHBOURS> nodeNeighboursType;
typedef std::vector<std::vector<nodeNeighboursType> > heirarchyType;
namespace cv {
namespace dynafu {
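// A deformation node: centre position, support radius and current rigid transform;
// weight() is the node's Gaussian influence at a query point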
struct WarpNode
{
Point3f pos;
float radius;
Affine3f transform;
float weight(Point3f x)
{
Point3f diff = pos - x;
float L2 = diff.x*diff.x + diff.y*diff.y + diff.z*diff.z;
return expf(-L2/(2.f*radius));
}
};
typedef std::vector<Ptr<WarpNode> > NodeVectorType;
class WarpField
{
public:
WarpField(int _maxNeighbours=1000000, int K=4, int levels=4, float baseResolution=.10f,
float resolutionGrowth=4);
void updateNodesFromPoints(InputArray _points);
const NodeVectorType& getNodes() const;
const heirarchyType& getRegGraph() const;
const std::vector<NodeVectorType>& getGraphNodes() const;
size_t getNodesLen() const
{
return nodes.size();
}
Point3f applyWarp(Point3f p, const nodeNeighboursType neighbours, int n, bool normal=false) const;
void setAllRT(Affine3f warpRT);
Ptr<flann::GenericIndex<flann::L2_Simple<float> > > getNodeIndex() const;
inline void findNeighbours(Point3f queryPt, std::vector<int>& indices, std::vector<float>& dists)
{
std::vector<float> query = {queryPt.x, queryPt.y, queryPt.z};
nodeIndex->knnSearch(query, indices, dists, k, cvflann::SearchParams());
}
int k; //k-nearest neighbours will be used
int n_levels; // number of levels in the hierarchy
private:
void removeSupported(flann::GenericIndex<flann::L2_Simple<float> >& ind, AutoBuffer<bool>& supInd);
NodeVectorType subsampleIndex(Mat& pmat, flann::GenericIndex<flann::L2_Simple<float> >& ind,
AutoBuffer<bool>& supInd, float res,
Ptr<flann::GenericIndex<flann::L2_Simple<float> > > knnIndex = nullptr);
void constructRegGraph();
void initTransforms(NodeVectorType nv);
NodeVectorType nodes; // hierarchy level 0
int maxNeighbours;
float baseRes;
float resGrowthRate;
std::vector<NodeVectorType> regGraphNodes; // hierarchy levels 1 to L
heirarchyType heirarchy;
Ptr<flann::GenericIndex<flann::L2_Simple<float> > > nodeIndex;
Mat nodesPos;
};
bool PtCmp(cv::Point3f a, cv::Point3f b);
Mat getNodesPos(NodeVectorType nv);
} // namespace dynafu
} // namespace cv
#endif
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html
// This code is also subject to the license terms in the LICENSE_KinectFusion.md file found in this module's directory
#include "test_precomp.hpp"
#ifdef HAVE_OPENGL
namespace opencv_test { namespace {
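// Read a depth-frame list: each non-comment line holds a timestamp followed by an
// image path relative to the list file's directory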
static std::vector<std::string> readDepth(std::string fileList)
{
std::vector<std::string> v;
std::fstream file(fileList);
if(!file.is_open())
throw std::runtime_error("Failed to read depth list");
std::string dir;
size_t slashIdx = fileList.rfind('/');
slashIdx = slashIdx != std::string::npos ? slashIdx : fileList.rfind('\\');
dir = fileList.substr(0, slashIdx);
while(!file.eof())
{
std::string s, imgPath;
std::getline(file, s);
if(s.empty() || s[0] == '#') continue;
std::stringstream ss;
ss << s;
double thumb;
ss >> thumb >> imgPath;
v.push_back(dir+'/'+imgPath);
}
return v;
}
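// Set to true to visualise the input depth and the rendered model while the test runs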
static const bool display = false;
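// Run DynaFu on the first 10 frames; from the second frame onwards the rendered surface
// is compared (mean/stddev of the absolute difference) against the previously measured depth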
void flyTest(bool hiDense, bool inequal)
{
Ptr<dynafu::Params> params;
if(hiDense)
params = dynafu::Params::defaultParams();
else
params = dynafu::Params::coarseParams();
if(inequal)
{
params->volumeDims[0] += 32;
params->volumeDims[1] -= 32;
}
std::vector<String> depths = readDepth(cvtest::TS::ptr()->get_data_path() + "dynafu/depth.txt");
CV_Assert(!depths.empty());
Ptr<dynafu::DynaFu> df = dynafu::DynaFu::create(params);
// Check for first 10 frames
CV_Assert(depths.size() >= 10);
Mat currentDepth, prevDepth;
for(size_t i = 0; i < 10; i++)
{
currentDepth = cv::imread(depths[i], IMREAD_ANYDEPTH);
ASSERT_TRUE(df->update(currentDepth));
Mat renderedDepth;
df->renderSurface(renderedDepth, noArray(), noArray());
if(i > 0)
{
// Check if estimated depth aligns with actual depth in the previous frame
Mat depthCvt8, renderCvt8;
convertScaleAbs(prevDepth, depthCvt8, 0.25*256. / params->depthFactor);
convertScaleAbs(renderedDepth, renderCvt8, 0.33*255, -0.5*0.33*255);
Mat diff;
absdiff(depthCvt8, renderCvt8, diff);
Scalar_<float> mu, sigma;
meanStdDev(diff, mu, sigma);
std::cout << "Mean: " << mu[0] << ", Std dev: " << sigma[0] << std::endl;
}
if(display)
{
imshow("depth", currentDepth*(1.f/params->depthFactor/4.f));
Mat rendered;
df->render(rendered);
imshow("render", rendered);
waitKey(10);
}
currentDepth.copyTo(prevDepth);
}
}
/*
#ifdef OPENCV_ENABLE_NONFREE
TEST( DynamicFusion, lowDense )
#else
TEST(DynamicFusion, DISABLED_lowDense)
#endif
{
flyTest(false, false);
}
#ifdef OPENCV_ENABLE_NONFREE
TEST( DynamicFusion, inequal )
#else
TEST(DynamicFusion, DISABLED_inequal)
#endif
{
flyTest(false, true);
}
*/
// To enable DynamicFusion tests, uncomment the above lines and delete the following lines
TEST(DynamicFusion, DISABLED)
{
CV_UNUSED(flyTest);
}
}} // namespace
#endif