Commit dd2793c8 (ffmpeg)
Authored May 28, 2011 by Stefano Sabatini
lavfi: add LUT (LookUp Table) generic filters
Parent: 83f9bc8a

Showing 6 changed files with 488 additions and 1 deletion (+488 -1)
Changelog                  +1   -0
doc/filters.texi           +112 -0
libavfilter/Makefile       +3   -0
libavfilter/allfilters.c   +3   -0
libavfilter/avfilter.h     +1   -1
libavfilter/vf_lut.c       +368 -0
Changelog
...
...
@@ -16,6 +16,7 @@ version 0.7:
- All av_metadata_* functions renamed to av_dict_* and moved to libavutil
- 4:4:4 H.264 decoding support
- 10-bit H.264 optimizations for x86
- lut, lutrgb, and lutyuv filters added
version 0.7_beta2:
...
...
doc/filters.texi
...
...
@@ -701,6 +701,118 @@ a float number which specifies chroma temporal strength, defaults to
@var{luma_tmp}*@var{chroma_spatial}/@var{luma_spatial}
@end table
@section lut, lutrgb, lutyuv
Compute a look-up table for binding each pixel component input value
to an output value, and apply it to input video.
@var{lutyuv} applies a lookup table to a YUV input video, @var{lutrgb}
to an RGB input video.
These filters accept in input a ":"-separated list of options, which
specify the expressions used for computing the lookup table for the
corresponding pixel component values.
The @var{lut} filter requires either YUV or RGB pixel formats in
input, and accepts the options:
@table @option
@var{c0} (first pixel component)
@var{c1} (second pixel component)
@var{c2} (third pixel component)
@var{c3} (fourth pixel component, corresponds to the alpha component)
@end table
The exact component associated to each option depends on the format in
input.
The @var{lutrgb} filter requires RGB pixel formats in input, and
accepts the options:
@table @option
@var{r} (red component)
@var{g} (green component)
@var{b} (blue component)
@var{a} (alpha component)
@end table
The @var{lutyuv} filter requires YUV pixel formats in input, and
accepts the options:
@table @option
@var{y} (Y/luminance component)
@var{u} (U/Cb component)
@var{v} (V/Cr component)
@var{a} (alpha component)
@end table
The expressions can contain the following constants and functions:
@table @option
@item E, PI, PHI
the corresponding mathematical approximated values for e
(euler number), pi (greek PI), PHI (golden ratio)
@item w, h
the input width and height
@item val
input value for the pixel component
@item clipval
the input value clipped in the @var{minval}-@var{maxval} range
@item maxval
maximum value for the pixel component
@item minval
minimum value for the pixel component
@item negval
the negated value for the pixel component value clipped in the
@var{minval}-@var{maxval} range, it corresponds to the expression
"maxval-clipval+minval"
@item clip(val)
the computed value in @var{val} clipped in the
@var{minval}-@var{maxval} range
@item gammaval(gamma)
the computed gamma correction value of the pixel component value
clipped in the @var{minval}-@var{maxval} range, corresponds to the
expression
"pow((clipval-minval)/(maxval-minval)\,@var{gamma})*(maxval-minval)+minval"
@end table
All expressions default to "val".
Some examples follow:
@example
# negate input video
lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
# the above is the same as
lutrgb="r=negval:g=negval:b=negval"
lutyuv="y=negval:u=negval:v=negval"
# negate luminance
lutyuv=negval
# remove chroma components, turns the video into a graytone image
lutyuv="u=128:v=128"
# apply a luma burning effect
lutyuv="y=2*val"
# remove green and blue components
lutrgb="g=0:b=0"
# set a constant alpha channel value on input
format=rgba,lutrgb=a="maxval-minval/2"
# correct luminance gamma by a 0.5 factor
lutyuv=y=gammaval(0.5)
@end example
@section mp
Apply an MPlayer filter to the input video.
...
...
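The negval and gammaval constants documented above reduce to simple per-value arithmetic. Below is a minimal standalone sketch of that arithmetic, assuming a full-range 8-bit component (minval = 0, maxval = 255); the helper names negval() and gammaval() are illustrative only and are not part of the committed code.

#include <math.h>
#include <stdio.h>

/* negval: "maxval-clipval+minval", i.e. the clipped value mirrored inside the range */
static int negval(int val, int minval, int maxval)
{
    int clipval = val < minval ? minval : val > maxval ? maxval : val;
    return maxval - clipval + minval;
}

/* gammaval(gamma): "pow((clipval-minval)/(maxval-minval),gamma)*(maxval-minval)+minval" */
static int gammaval(int val, double gamma, int minval, int maxval)
{
    int clipval = val < minval ? minval : val > maxval ? maxval : val;
    double norm = (double)(clipval - minval) / (maxval - minval);
    return (int)(pow(norm, gamma) * (maxval - minval) + minval);
}

int main(void)
{
    /* full-range 8-bit component: minval = 0, maxval = 255 */
    printf("negval(64)        = %d\n", negval(64, 0, 255));        /* prints 191 */
    printf("gammaval(64, 0.5) = %d\n", gammaval(64, 0.5, 0, 255)); /* prints ~127 */
    return 0;
}
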
libavfilter/Makefile
...
...
@@ -38,6 +38,9 @@ OBJS-$(CONFIG_FREI0R_FILTER) += vf_frei0r.o
OBJS-$(CONFIG_GRADFUN_FILTER)                += vf_gradfun.o
OBJS-$(CONFIG_HFLIP_FILTER)                  += vf_hflip.o
OBJS-$(CONFIG_HQDN3D_FILTER)                 += vf_hqdn3d.o
OBJS-$(CONFIG_LUT_FILTER)                    += vf_lut.o
OBJS-$(CONFIG_LUTRGB_FILTER)                 += vf_lut.o
OBJS-$(CONFIG_LUTYUV_FILTER)                 += vf_lut.o
OBJS-$(CONFIG_MP_FILTER)                     += vf_mp.o
OBJS-$(CONFIG_NOFORMAT_FILTER)               += vf_format.o
OBJS-$(CONFIG_NULL_FILTER)                   += vf_null.o
...
...
libavfilter/allfilters.c
...
...
@@ -54,6 +54,9 @@ void avfilter_register_all(void)
    REGISTER_FILTER (GRADFUN,     gradfun,     vf);
    REGISTER_FILTER (HFLIP,       hflip,       vf);
    REGISTER_FILTER (HQDN3D,      hqdn3d,      vf);
    REGISTER_FILTER (LUT,         lut,         vf);
    REGISTER_FILTER (LUTRGB,      lutrgb,      vf);
    REGISTER_FILTER (LUTYUV,      lutyuv,      vf);
    REGISTER_FILTER (MP,          mp,          vf);
    REGISTER_FILTER (NOFORMAT,    noformat,    vf);
    REGISTER_FILTER (NULL,        null,        vf);
...
...
libavfilter/avfilter.h
...
...
@@ -26,7 +26,7 @@
#include "libavutil/samplefmt.h"
#define LIBAVFILTER_VERSION_MAJOR  2
-#define LIBAVFILTER_VERSION_MINOR 18
+#define LIBAVFILTER_VERSION_MINOR 19
#define LIBAVFILTER_VERSION_MICRO  0
#define LIBAVFILTER_VERSION_INT AV_VERSION_INT(LIBAVFILTER_VERSION_MAJOR, \
...
...
libavfilter/vf_lut.c
0 → 100644
/*
* Copyright (c) 2011 Stefano Sabatini
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Compute a look-up table for binding the input value to the output
* value, and apply it to input video.
*/
#include "libavutil/eval.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "avfilter.h"
static const char *var_names[] = {
    "E",
    "PHI",
    "PI",
    "w",        ///< width of the input video
    "h",        ///< height of the input video
    "val",      ///< input value for the pixel
    "maxval",   ///< max value for the pixel
    "minval",   ///< min value for the pixel
    "negval",   ///< negated value
    "clipval",
    NULL
};

enum var_name {
    VAR_E,
    VAR_PHI,
    VAR_PI,
    VAR_W,
    VAR_H,
    VAR_VAL,
    VAR_MAXVAL,
    VAR_MINVAL,
    VAR_NEGVAL,
    VAR_CLIPVAL,
    VAR_VARS_NB
};

typedef struct {
    const AVClass *class;
    uint8_t lut[4][256];  ///< lookup table for each component
    char   *comp_expr_str[4];
    AVExpr *comp_expr[4];
    int hsub, vsub;
    double var_values[VAR_VARS_NB];
    int is_rgb, is_yuv;
    int rgba_map[4];
    int step;
} LutContext;
#define Y 0
#define U 1
#define V 2
#define R 0
#define G 1
#define B 2
#define A 3
#define OFFSET(x) offsetof(LutContext, x)
static const AVOption lut_options[] = {
    {"c0", "set component #0 expression", OFFSET(comp_expr_str[0]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"c1", "set component #1 expression", OFFSET(comp_expr_str[1]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"c2", "set component #2 expression", OFFSET(comp_expr_str[2]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"c3", "set component #3 expression", OFFSET(comp_expr_str[3]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"y",  "set Y expression",            OFFSET(comp_expr_str[Y]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"u",  "set U expression",            OFFSET(comp_expr_str[U]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"v",  "set V expression",            OFFSET(comp_expr_str[V]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"r",  "set R expression",            OFFSET(comp_expr_str[R]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"g",  "set G expression",            OFFSET(comp_expr_str[G]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"b",  "set B expression",            OFFSET(comp_expr_str[B]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {"a",  "set A expression",            OFFSET(comp_expr_str[A]), FF_OPT_TYPE_STRING, {.str = "val"}, CHAR_MIN, CHAR_MAX},
    {NULL},
};

static const char *lut_get_name(void *ctx)
{
    return "lut";
}

static const AVClass lut_class = {
    "LutContext",
    lut_get_name,
    lut_options
};

static int init(AVFilterContext *ctx, const char *args, void *opaque)
{
    LutContext *lut = ctx->priv;
    int ret;

    lut->class = &lut_class;
    av_opt_set_defaults2(lut, 0, 0);

    lut->var_values[VAR_PHI] = M_PHI;
    lut->var_values[VAR_PI]  = M_PI;
    lut->var_values[VAR_E ]  = M_E;

    lut->is_rgb = !strcmp(ctx->filter->name, "lutrgb");
    lut->is_yuv = !strcmp(ctx->filter->name, "lutyuv");
    if (args && (ret = av_set_options_string(lut, args, "=", ":")) < 0)
        return ret;

    return 0;
}
static av_cold void uninit(AVFilterContext *ctx)
{
    LutContext *lut = ctx->priv;
    int i;

    for (i = 0; i < 4; i++) {
        av_expr_free(lut->comp_expr[i]);
        lut->comp_expr[i] = NULL;
        av_freep(&lut->comp_expr_str[i]);
    }
}
#define YUV_FORMATS \
PIX_FMT_YUV444P, PIX_FMT_YUV422P, PIX_FMT_YUV420P, \
PIX_FMT_YUV411P, PIX_FMT_YUV410P, PIX_FMT_YUV440P, \
PIX_FMT_YUVA420P, \
PIX_FMT_YUVJ444P, PIX_FMT_YUVJ422P, PIX_FMT_YUVJ420P, \
PIX_FMT_YUVJ440P
#define RGB_FORMATS \
PIX_FMT_ARGB, PIX_FMT_RGBA, \
PIX_FMT_ABGR, PIX_FMT_BGRA, \
PIX_FMT_RGB24, PIX_FMT_BGR24
static enum PixelFormat yuv_pix_fmts[] = { YUV_FORMATS, PIX_FMT_NONE };
static enum PixelFormat rgb_pix_fmts[] = { RGB_FORMATS, PIX_FMT_NONE };
static enum PixelFormat all_pix_fmts[] = { RGB_FORMATS, YUV_FORMATS, PIX_FMT_NONE };

static int query_formats(AVFilterContext *ctx)
{
    LutContext *lut = ctx->priv;

    enum PixelFormat *pix_fmts = lut->is_rgb ? rgb_pix_fmts :
                                 lut->is_yuv ? yuv_pix_fmts : all_pix_fmts;

    avfilter_set_common_formats(ctx, avfilter_make_format_list(pix_fmts));
    return 0;
}

static int pix_fmt_is_in(enum PixelFormat pix_fmt, enum PixelFormat *pix_fmts)
{
    enum PixelFormat *p;

    for (p = pix_fmts; *p != PIX_FMT_NONE; p++) {
        if (pix_fmt == *p)
            return 1;
    }
    return 0;
}
/**
* Clip value val in the minval - maxval range.
*/
static double clip(void *opaque, double val)
{
    LutContext *lut = opaque;
    double minval = lut->var_values[VAR_MINVAL];
    double maxval = lut->var_values[VAR_MAXVAL];

    return av_clip(val, minval, maxval);
}
/**
* Compute gamma correction for value val, assuming the minval-maxval
* range, val is clipped to a value contained in the same interval.
*/
static double compute_gammaval(void *opaque, double gamma)
{
    LutContext *lut = opaque;
    double val    = lut->var_values[VAR_CLIPVAL];
    double minval = lut->var_values[VAR_MINVAL];
    double maxval = lut->var_values[VAR_MAXVAL];

    return pow((val-minval)/(maxval-minval), gamma) * (maxval-minval) + minval;
}

static double (* const funcs1[])(void *, double) = {
    (void *)clip,
    (void *)compute_gammaval,
    NULL
};

static const char * const funcs1_names[] = {
    "clip",
    "gammaval",
    NULL
};
static int config_props(AVFilterLink *inlink)
{
    AVFilterContext *ctx = inlink->dst;
    LutContext *lut = ctx->priv;
    const AVPixFmtDescriptor *desc = &av_pix_fmt_descriptors[inlink->format];
    int min[4], max[4];
    int val, comp, ret;

    lut->hsub = desc->log2_chroma_w;
    lut->vsub = desc->log2_chroma_h;

    lut->var_values[VAR_W] = inlink->w;
    lut->var_values[VAR_H] = inlink->h;

    switch (inlink->format) {
    case PIX_FMT_YUV410P:
    case PIX_FMT_YUV411P:
    case PIX_FMT_YUV420P:
    case PIX_FMT_YUV422P:
    case PIX_FMT_YUV440P:
    case PIX_FMT_YUV444P:
    case PIX_FMT_YUVA420P:
        min[Y] = min[U] = min[V] = 16;
        max[Y] = 235;
        max[U] = max[V] = 240;
        break;
    default:
        min[0] = min[1] = min[2] = min[3] = 0;
        max[0] = max[1] = max[2] = max[3] = 255;
    }

    lut->is_yuv = lut->is_rgb = 0;
    if      (pix_fmt_is_in(inlink->format, yuv_pix_fmts)) lut->is_yuv = 1;
    else if (pix_fmt_is_in(inlink->format, rgb_pix_fmts)) lut->is_rgb = 1;

    if (lut->is_rgb) {
        switch (inlink->format) {
        case PIX_FMT_ARGB:
            lut->rgba_map[A] = 0; lut->rgba_map[R] = 1;
            lut->rgba_map[G] = 2; lut->rgba_map[B] = 3;
            break;
        case PIX_FMT_ABGR:
            lut->rgba_map[A] = 0; lut->rgba_map[B] = 1;
            lut->rgba_map[G] = 2; lut->rgba_map[R] = 3;
            break;
        case PIX_FMT_RGBA:
        case PIX_FMT_RGB24:
            lut->rgba_map[R] = 0; lut->rgba_map[G] = 1;
            lut->rgba_map[B] = 2; lut->rgba_map[A] = 3;
            break;
        case PIX_FMT_BGRA:
        case PIX_FMT_BGR24:
            lut->rgba_map[B] = 0; lut->rgba_map[G] = 1;
            lut->rgba_map[R] = 2; lut->rgba_map[A] = 3;
            break;
        }
        lut->step = av_get_bits_per_pixel(desc) >> 3;
    }

    for (comp = 0; comp < desc->nb_components; comp++) {
        double res;

        /* create the parsed expression */
        ret = av_expr_parse(&lut->comp_expr[comp], lut->comp_expr_str[comp],
                            var_names, funcs1_names, funcs1, NULL, NULL, 0, ctx);
        if (ret < 0) {
            av_log(ctx, AV_LOG_ERROR,
                   "Error when parsing the expression '%s' for the component %d.\n",
                   lut->comp_expr_str[comp], comp);
            return AVERROR(EINVAL);
        }

        /* compute the lut */
        lut->var_values[VAR_MAXVAL] = max[comp];
        lut->var_values[VAR_MINVAL] = min[comp];

        for (val = 0; val < 256; val++) {
            lut->var_values[VAR_VAL] = val;
            lut->var_values[VAR_CLIPVAL] = av_clip(val, min[comp], max[comp]);
            lut->var_values[VAR_NEGVAL] =
                av_clip(min[comp] + max[comp] - lut->var_values[VAR_VAL],
                        min[comp], max[comp]);

            res = av_expr_eval(lut->comp_expr[comp], lut->var_values, lut);
            if (isnan(res)) {
                av_log(ctx, AV_LOG_ERROR,
                       "Error when evaluating the expression '%s' for the value %d for the component #%d.\n",
                       lut->comp_expr_str[comp], val, comp);
                return AVERROR(EINVAL);
            }
            lut->lut[comp][val] = av_clip((int)res, min[comp], max[comp]);
            av_log(ctx, AV_LOG_DEBUG, "val[%d][%d] = %d\n",
                   comp, val, lut->lut[comp][val]);
        }
    }

    return 0;
}
static void draw_slice(AVFilterLink *inlink, int y, int h, int slice_dir)
{
    AVFilterContext *ctx = inlink->dst;
    LutContext *lut = ctx->priv;
    AVFilterLink *outlink = ctx->outputs[0];
    AVFilterBufferRef *inpic  = inlink ->cur_buf;
    AVFilterBufferRef *outpic = outlink->out_buf;
    uint8_t *inrow, *outrow;
    int i, j, k, plane;

    if (lut->is_rgb) {
        /* packed */
        inrow  = inpic ->data[0] + y * inpic ->linesize[0];
        outrow = outpic->data[0] + y * outpic->linesize[0];

        for (i = 0; i < h; i++) {
            for (j = 0; j < inlink->w; j++) {
                for (k = 0; k < lut->step; k++)
                    outrow[k] = lut->lut[lut->rgba_map[k]][inrow[k]];
                outrow += lut->step;
                inrow  += lut->step;
            }
        }
    } else {
        /* planar */
        for (plane = 0; inpic->data[plane]; plane++) {
            int vsub = plane == 1 || plane == 2 ? lut->vsub : 0;
            int hsub = plane == 1 || plane == 2 ? lut->hsub : 0;
            inrow  = inpic ->data[plane] + (y >> vsub) * inpic ->linesize[plane];
            outrow = outpic->data[plane] + (y >> vsub) * outpic->linesize[plane];

            for (i = 0; i < h >> vsub; i++) {
                for (j = 0; j < inlink->w >> hsub; j++)
                    outrow[j] = lut->lut[plane][inrow[j]];
                inrow  += inpic ->linesize[plane];
                outrow += outpic->linesize[plane];
            }
        }
    }

    avfilter_draw_slice(outlink, y, h, slice_dir);
}
#define DEFINE_LUT_FILTER(name_, description_, init_) \
AVFilter avfilter_vf_##name_ = { \
.name = NULL_IF_CONFIG_SMALL(#name_), \
.description = description_, \
.priv_size = sizeof(LutContext), \
\
.init = init_, \
.uninit = uninit, \
.query_formats = query_formats, \
\
.inputs = (AVFilterPad[]) {{ .name = "default", \
.type = AVMEDIA_TYPE_VIDEO, \
.draw_slice = draw_slice, \
.config_props = config_props, \
.min_perms = AV_PERM_READ, }, \
{ .name = NULL}}, \
.outputs = (AVFilterPad[]) {{ .name = "default", \
.type = AVMEDIA_TYPE_VIDEO, }, \
{ .name = NULL}}, \
}
DEFINE_LUT_FILTER(lut,    "Compute and apply a lookup table to the RGB/YUV input video.", init);
DEFINE_LUT_FILTER(lutyuv, "Compute and apply a lookup table to the YUV input video.",     init);
DEFINE_LUT_FILTER(lutrgb, "Compute and apply a lookup table to the RGB input video.",     init);
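The planar branch of draw_slice above carries the actual per-pixel work: each plane indexes its own 256-entry table, with chroma planes walked at reduced resolution. The following standalone sketch restates that inner loop for a single 8-bit plane; apply_plane_lut and its parameter names are illustrative assumptions, not libavfilter API.

#include <stdint.h>

/* Apply a 256-entry LUT to one plane, honouring the plane's horizontal and
 * vertical log2 subsampling factors (hsub/vsub), as the planar branch of
 * draw_slice does for each chroma plane. */
static void apply_plane_lut(const uint8_t lut[256],
                            const uint8_t *in,  int in_linesize,
                            uint8_t *out,       int out_linesize,
                            int w, int h, int hsub, int vsub)
{
    for (int i = 0; i < (h >> vsub); i++) {
        for (int j = 0; j < (w >> hsub); j++)
            out[j] = lut[in[j]];
        in  += in_linesize;
        out += out_linesize;
    }
}

For a 4:2:0 chroma plane, hsub = vsub = 1, so only (w >> 1) by (h >> 1) samples are touched, which matches the w >> hsub and h >> vsub arithmetic in draw_slice.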