Commit 875f2388 authored by Alexander Alekhin's avatar Alexander Alekhin

Merge pull request #11942 from catree:add_tutorial_core_java_python

parents d0a36868 c9fe6f1a
......@@ -53,48 +53,143 @@ Theory
Code
----
@add_toggle_cpp
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp)
- The following code performs the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ :
@include samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp
@end_toggle
@add_toggle_java
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java)
- The following code performs the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ :
@include samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java
@end_toggle
@add_toggle_python
- **Downloadable code**: Click
[here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py)
- The following code performs the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ :
@include samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py
@end_toggle
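As a quick worked example of the transform (values chosen arbitrarily): with \f$\alpha = 2.0\f$ and \f$\beta = 20\f$, an input pixel value of 100 maps to \f$2.0 \cdot 100 + 20 = 220\f$, while a value of 130 would map to 280 and is therefore saturated to 255 on an 8-bit image (saturation is discussed in the Explanation below).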
Explanation
-----------
- We load an image using @ref cv::imread and save it in a Mat object:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-load
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-load
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-load
@end_toggle
- Now, since we will make some transformations to this image, we need a new Mat object to store
it. Also, we want this to have the following features:
- Initial pixel values equal to zero
- Same size and type as the original image
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-output
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-output
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-output
@end_toggle
We observe that @ref cv::Mat::zeros returns a MATLAB-style zero initializer based on
*image.size()* and *image.type()*.
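For readers who want to try this outside the full sample, here is a minimal standalone sketch of the same zero initialization (the input file name is just a placeholder):
@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
using namespace cv;

int main()
{
    Mat image = imread("lena.jpg"); // placeholder path
    // Same size and type as the source image, every pixel initialized to zero
    Mat new_image = Mat::zeros( image.size(), image.type() );
    // An equivalent, more explicit form of the same allocation
    Mat new_image2( image.rows, image.cols, image.type(), Scalar::all(0) );
    return 0;
}
@endcode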
- We now ask the user to enter the values of \f$\alpha\f$ and \f$\beta\f$:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-parameters
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-parameters
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-parameters
@end_toggle
- Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$, we will access each
pixel in the image. Since we are operating with BGR images, we will have three values per pixel (B,
G and R), which we will also access separately. Here is the piece of code:
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-operation
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-operation
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-operation
@end_toggle
Notice the following (**C++ code only**):
- To access each pixel in the image we use this syntax: *image.at\<Vec3b\>(y,x)[c]*,
where *y* is the row, *x* is the column and *c* is the channel: 0, 1 or 2 for B, G or R.
- Since the operation \f$\alpha \cdot p(i,j) + \beta\f$ can produce values that are out of range or
non-integer (if \f$\alpha\f$ is a float), we use cv::saturate_cast to make sure the
values are valid.
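A minimal sketch (the numeric values are chosen arbitrarily) of how cv::saturate_cast rounds and clamps to the `uchar` range, and how *image.at\<Vec3b\>(y,x)[c]* addresses a single channel:
@code{.cpp}
#include <opencv2/core.hpp>
#include <iostream>
using namespace cv;

int main()
{
    // saturate_cast clamps to the valid range of the destination type (uchar: [0, 255])
    std::cout << (int)saturate_cast<uchar>( 2.0 * 130 + 20 ) << std::endl; // 280  -> 255
    std::cout << (int)saturate_cast<uchar>( -15 )            << std::endl; // -15  -> 0
    std::cout << (int)saturate_cast<uchar>( 99.6 )           << std::endl; // 99.6 -> 100 (rounded)

    // Per-channel access on a 3-channel 8-bit image: y = row, x = column, c = channel (0 = B, 1 = G, 2 = R)
    Mat img( 2, 2, CV_8UC3, Scalar(10, 20, 30) );
    img.at<Vec3b>(0, 1)[2] = 200; // set the red channel of the pixel at row 0, column 1
    std::cout << (int)img.at<Vec3b>(0, 1)[2] << std::endl; // 200
    return 0;
}
@endcode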
- Finally, we create windows and show the images, the usual way.
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-display
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-display
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-display
@end_toggle
@note
Instead of using the **for** loops to access each pixel, we could have simply used this command:
@add_toggle_cpp
@code{.cpp}
image.convertTo(new_image, -1, alpha, beta);
@endcode
@end_toggle
@add_toggle_java
@code{.java}
image.convertTo(newImage, -1, alpha, beta);
@endcode
@end_toggle
@add_toggle_python
@code{.py}
new_image = cv.convertScaleAbs(image, alpha=alpha, beta=beta)
@endcode
@end_toggle
where @ref cv::Mat::convertTo would effectively perform \f$new\_image = \alpha \cdot image + \beta\f$. However, we
wanted to show you how to access each pixel. In any case, both methods give the same result, but
convertTo is better optimized and works a lot faster.
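As a cross-check of that claim, here is a small self-contained sketch (random test data, arbitrary \f$\alpha\f$ and \f$\beta\f$) that compares the per-pixel loop with @ref cv::Mat::convertTo; the two outputs are expected to match:
@code{.cpp}
#include <opencv2/core.hpp>
#include <iostream>
using namespace cv;

int main()
{
    const double alpha = 1.5;
    const int beta = 30;
    Mat image( 64, 64, CV_8UC3 );
    randu( image, Scalar::all(0), Scalar::all(255) ); // random stand-in for a loaded picture

    // Manual per-pixel version, as in the tutorial loop
    Mat manual = Mat::zeros( image.size(), image.type() );
    for( int y = 0; y < image.rows; y++ )
        for( int x = 0; x < image.cols; x++ )
            for( int c = 0; c < image.channels(); c++ )
                manual.at<Vec3b>(y,x)[c] = saturate_cast<uchar>( alpha*image.at<Vec3b>(y,x)[c] + beta );

    // Library version
    Mat converted;
    image.convertTo( converted, -1, alpha, beta );

    // Count how many channel values differ (expected: 0)
    Mat diff;
    absdiff( manual, converted, diff );
    std::cout << "differing values: " << countNonZero( diff.reshape(1) ) << std::endl;
    return 0;
}
@endcode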
Result
------
......@@ -185,10 +280,31 @@ and are not intended to be used as a replacement of a raster graphics editor!**
### Code
@add_toggle_cpp
Code for the tutorial is [here](https://github.com/opencv/opencv/blob/3.4/samples/cpp/tutorial_code/ImgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.cpp).
@end_toggle
@add_toggle_java
Code for the tutorial is [here](https://github.com/opencv/opencv/blob/3.4/samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/ChangingContrastBrightnessImageDemo.java).
@end_toggle
@add_toggle_python
Code for the tutorial is [here](https://github.com/opencv/opencv/blob/3.4/samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.py).
@end_toggle
Code for the gamma correction:
@snippet changing_contrast_brightness_image.cpp changing-contrast-brightness-gamma-correction
@add_toggle_cpp
@snippet samples/cpp/tutorial_code/ImgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.cpp changing-contrast-brightness-gamma-correction
@end_toggle
@add_toggle_java
@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/ChangingContrastBrightnessImageDemo.java changing-contrast-brightness-gamma-correction
@end_toggle
@add_toggle_python
@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.py changing-contrast-brightness-gamma-correction
@end_toggle
A look-up table is used to improve the performance of the computation, as only 256 values need to be computed once.
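For reference, a condensed standalone sketch of the same look-up-table idea (the gamma value and file names are placeholders): the table is filled once with all 256 possible input values, and @ref cv::LUT then remaps every pixel of every channel.
@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cmath>
using namespace cv;

int main()
{
    const double gamma_ = 0.5;    // placeholder gamma value
    Mat img = imread("lena.jpg"); // placeholder input path
    if( img.empty() )
        return -1;

    // Precompute the mapping for every possible 8-bit value
    Mat lookUpTable( 1, 256, CV_8U );
    uchar* p = lookUpTable.ptr();
    for( int i = 0; i < 256; ++i )
        p[i] = saturate_cast<uchar>( std::pow(i / 255.0, gamma_) * 255.0 );

    // Apply the table to the whole image in one call
    Mat res;
    LUT( img, lookUpTable, res );
    imwrite( "gamma_corrected.png", res ); // placeholder output name
    return 0;
}
@endcode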
......
......@@ -36,6 +36,10 @@ understanding how to manipulate the images on a pixel level.
- @subpage tutorial_mat_operations
*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 2.0
Reading/writing images from file, accessing pixels, primitive operations, visualizing images.
- @subpage tutorial_adding_images
......@@ -50,6 +54,8 @@ understanding how to manipulate the images on a pixel level.
- @subpage tutorial_basic_linear_transform
*Languages:* C++, Java, Python
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
......
......@@ -20,29 +20,32 @@ using namespace cv;
*/
int main( int argc, char** argv )
{
/// Read image given by user
//! [basic-linear-transform-load]
CommandLineParser parser( argc, argv, "{@input | ../data/lena.jpg | input image}" );
Mat image = imread( parser.get<String>( "@input" ) );
if( image.empty() )
{
cout << "Could not open or find the image!\n" << endl;
cout << "Usage: " << argv[0] << " <Input image>" << endl;
return -1;
}
//! [basic-linear-transform-load]
//! [basic-linear-transform-output]
Mat new_image = Mat::zeros( image.size(), image.type() );
//! [basic-linear-transform-output]
//! [basic-linear-transform-parameters]
double alpha = 1.0; /*< Simple contrast control */
int beta = 0; /*< Simple brightness control */
/// Initialize values
cout << " Basic Linear Transforms " << endl;
cout << "-------------------------" << endl;
cout << "* Enter the alpha value [1.0-3.0]: "; cin >> alpha;
cout << "* Enter the beta value [0-100]: "; cin >> beta;
//! [basic-linear-transform-parameters]
/// Do the operation new_image(i,j) = alpha*image(i,j) + beta
/// Instead of these 'for' loops we could have used simply:
......@@ -51,19 +54,15 @@ int main( int argc, char** argv )
//! [basic-linear-transform-operation]
for( int y = 0; y < image.rows; y++ ) {
for( int x = 0; x < image.cols; x++ ) {
for( int c = 0; c < image.channels(); c++ ) {
new_image.at<Vec3b>(y,x)[c] =
saturate_cast<uchar>( alpha*image.at<Vec3b>(y,x)[c] + beta );
}
}
}
//! [basic-linear-transform-operation]
//! [basic-linear-transform-display]
/// Create Windows
namedWindow("Original Image", WINDOW_AUTOSIZE);
namedWindow("New Image", WINDOW_AUTOSIZE);
/// Show stuff
imshow("Original Image", image);
imshow("New Image", new_image);
......
......@@ -3,6 +3,8 @@
#include "opencv2/highgui.hpp"
// we're NOT "using namespace std;" here, to avoid collisions between the beta variable and std::beta in c++17
using std::cout;
using std::endl;
using namespace cv;
namespace
......@@ -19,12 +21,13 @@ void basicLinearTransform(const Mat &img, const double alpha_, const int beta_)
img.convertTo(res, -1, alpha_, beta_);
hconcat(img, res, img_corrected);
imshow("Brightness and contrast adjustments", img_corrected);
}
void gammaCorrection(const Mat &img, const double gamma_)
{
CV_Assert(gamma_ >= 0);
//! [changing-contrast-brightness-gamma-correction]
Mat lookUpTable(1, 256, CV_8U);
uchar* p = lookUpTable.ptr();
for( int i = 0; i < 256; ++i)
......@@ -32,9 +35,10 @@ void gammaCorrection(const Mat &img, const double gamma_)
Mat res = img.clone();
LUT(img, lookUpTable, res);
//! [changing-contrast-brightness-gamma-correction]
hconcat(img, res, img_gamma_corrected);
imshow("Gamma correction", img_gamma_corrected);
}
void on_linear_transform_alpha_trackbar(int, void *)
......@@ -60,36 +64,32 @@ void on_gamma_correction_trackbar(int, void *)
int main( int argc, char** argv )
{
CommandLineParser parser( argc, argv, "{@input | ../data/lena.jpg | input image}" );
img_original = imread( parser.get<String>( "@input" ) );
if( img_original.empty() )
{
cout << "Could not open or find the image!\n" << endl;
cout << "Usage: " << argv[0] << " <Input image>" << endl;
return -1;
}
img_corrected = Mat(img_original.rows, img_original.cols*2, img_original.type());
img_gamma_corrected = Mat(img_original.rows, img_original.cols*2, img_original.type());
hconcat(img_original, img_original, img_corrected);
hconcat(img_original, img_original, img_gamma_corrected);
namedWindow("Brightness and contrast adjustments", WINDOW_AUTOSIZE);
namedWindow("Gamma correction", WINDOW_AUTOSIZE);
namedWindow("Brightness and contrast adjustments");
namedWindow("Gamma correction");
createTrackbar("Alpha gain (contrast)", "Brightness and contrast adjustments", &alpha, 500, on_linear_transform_alpha_trackbar);
createTrackbar("Beta bias (brightness)", "Brightness and contrast adjustments", &beta, 200, on_linear_transform_beta_trackbar);
createTrackbar("Gamma correction", "Gamma correction", &gamma_cor, 200, on_gamma_correction_trackbar);
on_linear_transform_alpha_trackbar(0, 0);
on_gamma_correction_trackbar(0, 0);
waitKey();
imwrite("linear_transform_correction.png", img_corrected);
imwrite("gamma_correction.png", img_gamma_corrected);
......
/* Snippet code for Operations with images tutorial (not intended to be run but should built successfully) */
#include "opencv2/core.hpp"
#include "opencv2/core/core_c.h"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int,char**)
{
std::string filename = "";
// Input/Output
{
//! [Load an image from a file]
Mat img = imread(filename);
//! [Load an image from a file]
CV_UNUSED(img);
}
{
//! [Load an image from a file in grayscale]
Mat img = imread(filename, IMREAD_GRAYSCALE);
//! [Load an image from a file in grayscale]
CV_UNUSED(img);
}
{
Mat img(4,4,CV_8U);
//! [Save image]
imwrite(filename, img);
//! [Save image]
}
// Accessing pixel intensity values
{
Mat img(4,4,CV_8U);
int y = 0, x = 0;
{
//! [Pixel access 1]
Scalar intensity = img.at<uchar>(y, x);
//! [Pixel access 1]
CV_UNUSED(intensity);
}
{
//! [Pixel access 2]
Scalar intensity = img.at<uchar>(Point(x, y));
//! [Pixel access 2]
CV_UNUSED(intensity);
}
{
//! [Pixel access 3]
Vec3b intensity = img.at<Vec3b>(y, x);
uchar blue = intensity.val[0];
uchar green = intensity.val[1];
uchar red = intensity.val[2];
//! [Pixel access 3]
CV_UNUSED(blue);
CV_UNUSED(green);
CV_UNUSED(red);
}
{
//! [Pixel access 4]
Vec3f intensity = img.at<Vec3f>(y, x);
float blue = intensity.val[0];
float green = intensity.val[1];
float red = intensity.val[2];
//! [Pixel access 4]
CV_UNUSED(blue);
CV_UNUSED(green);
CV_UNUSED(red);
}
{
//! [Pixel access 5]
img.at<uchar>(y, x) = 128;
//! [Pixel access 5]
}
{
int i = 0;
//! [Mat from points vector]
vector<Point2f> points;
//... fill the array
Mat pointsMat = Mat(points);
//! [Mat from points vector]
//! [Point access]
Point2f point = pointsMat.at<Point2f>(i, 0);
//! [Point access]
CV_UNUSED(point);
}
}
// Memory management and reference counting
{
//! [Reference counting 1]
std::vector<Point3f> points;
// .. fill the array
Mat pointsMat = Mat(points).reshape(1);
//! [Reference counting 1]
CV_UNUSED(pointsMat);
}
{
//! [Reference counting 2]
Mat img = imread("image.jpg");
Mat img1 = img.clone();
//! [Reference counting 2]
CV_UNUSED(img1);
}
{
//! [Reference counting 3]
Mat img = imread("image.jpg");
Mat sobelx;
Sobel(img, sobelx, CV_32F, 1, 0);
//! [Reference counting 3]
}
// Primitive operations
{
Mat img;
{
//! [Set image to black]
img = Scalar(0);
//! [Set image to black]
}
{
//! [Select ROI]
Rect r(10, 10, 100, 100);
Mat smallImg = img(r);
//! [Select ROI]
CV_UNUSED(smallImg);
}
}
{
//! [C-API conversion]
Mat img = imread("image.jpg");
IplImage img1 = img;
CvMat m = img;
//! [C-API conversion]
CV_UNUSED(img1);
CV_UNUSED(m);
}
{
//! [BGR to Gray]
Mat img = imread("image.jpg"); // loading a 8UC3 image
Mat grey;
cvtColor(img, grey, COLOR_BGR2GRAY);
//! [BGR to Gray]
}
{
Mat dst, src;
//! [Convert to CV_32F]
src.convertTo(dst, CV_32F);
//! [Convert to CV_32F]
}
// Visualizing images
{
//! [imshow 1]
Mat img = imread("image.jpg");
namedWindow("image", WINDOW_AUTOSIZE);
imshow("image", img);
waitKey();
//! [imshow 1]
}
{
//! [imshow 2]
Mat img = imread("image.jpg");
Mat grey;
cvtColor(img, grey, COLOR_BGR2GRAY);
Mat sobelx;
Sobel(grey, sobelx, CV_32F, 1, 0);
double minVal, maxVal;
minMaxLoc(sobelx, &minVal, &maxVal); //find minimum and maximum intensities
Mat draw;
sobelx.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
namedWindow("image", WINDOW_AUTOSIZE);
imshow("image", draw);
waitKey();
//! [imshow 2]
}
return 0;
}
import java.util.Scanner;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
class BasicLinearTransforms {
private byte saturate(double val) {
int iVal = (int) Math.round(val);
iVal = iVal > 255 ? 255 : (iVal < 0 ? 0 : iVal);
return (byte) iVal;
}
public void run(String[] args) {
/// Read image given by user
//! [basic-linear-transform-load]
String imagePath = args.length > 0 ? args[0] : "../data/lena.jpg";
Mat image = Imgcodecs.imread(imagePath);
if (image.empty()) {
System.out.println("Empty image: " + imagePath);
System.exit(0);
}
//! [basic-linear-transform-load]
//! [basic-linear-transform-output]
Mat newImage = Mat.zeros(image.size(), image.type());
//! [basic-linear-transform-output]
//! [basic-linear-transform-parameters]
double alpha = 1.0; /*< Simple contrast control */
int beta = 0; /*< Simple brightness control */
/// Initialize values
System.out.println(" Basic Linear Transforms ");
System.out.println("-------------------------");
try (Scanner scanner = new Scanner(System.in)) {
System.out.print("* Enter the alpha value [1.0-3.0]: ");
alpha = scanner.nextDouble();
System.out.print("* Enter the beta value [0-100]: ");
beta = scanner.nextInt();
}
//! [basic-linear-transform-parameters]
/// Do the operation newImage(i,j) = alpha*image(i,j) + beta
/// Instead of these 'for' loops we could have used simply:
/// image.convertTo(newImage, -1, alpha, beta);
/// but we wanted to show you how to access the pixels :)
//! [basic-linear-transform-operation]
byte[] imageData = new byte[(int) (image.total()*image.channels())];
image.get(0, 0, imageData);
byte[] newImageData = new byte[(int) (newImage.total()*newImage.channels())];
for (int y = 0; y < image.rows(); y++) {
for (int x = 0; x < image.cols(); x++) {
for (int c = 0; c < image.channels(); c++) {
double pixelValue = imageData[(y * image.cols() + x) * image.channels() + c];
/// Java byte range is [-128, 127]
pixelValue = pixelValue < 0 ? pixelValue + 256 : pixelValue;
newImageData[(y * image.cols() + x) * image.channels() + c]
= saturate(alpha * pixelValue + beta);
}
}
}
newImage.put(0, 0, newImageData);
//! [basic-linear-transform-operation]
//! [basic-linear-transform-display]
/// Show stuff
HighGui.imshow("Original Image", image);
HighGui.imshow("New Image", newImage);
/// Wait until user press some key
HighGui.waitKey();
//! [basic-linear-transform-display]
System.exit(0);
}
}
public class BasicLinearTransformsDemo {
public static void main(String[] args) {
// Load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
new BasicLinearTransforms().run(args);
}
}
import java.awt.BorderLayout;
import java.awt.Container;
import java.awt.Image;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.BoxLayout;
import javax.swing.ImageIcon;
import javax.swing.JCheckBox;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JSlider;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
class ChangingContrastBrightnessImage {
private static int MAX_VALUE_ALPHA = 500;
private static int MAX_VALUE_BETA_GAMMA = 200;
private static final String WINDOW_NAME = "Changing the contrast and brightness of an image demo";
private static final String ALPHA_NAME = "Alpha gain (contrast)";
private static final String BETA_NAME = "Beta bias (brightness)";
private static final String GAMMA_NAME = "Gamma correction";
private JFrame frame;
private Mat matImgSrc = new Mat();
private JLabel imgSrcLabel;
private JLabel imgModifLabel;
private JPanel controlPanel;
private JPanel alphaBetaPanel;
private JPanel gammaPanel;
private double alphaValue = 1.0;
private double betaValue = 0.0;
private double gammaValue = 1.0;
private JCheckBox methodCheckBox;
private JSlider sliderAlpha;
private JSlider sliderBeta;
private JSlider sliderGamma;
public ChangingContrastBrightnessImage(String[] args) {
String imagePath = args.length > 0 ? args[0] : "../data/lena.jpg";
matImgSrc = Imgcodecs.imread(imagePath);
if (matImgSrc.empty()) {
System.out.println("Empty image: " + imagePath);
System.exit(0);
}
// Create and set up the window.
frame = new JFrame(WINDOW_NAME);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
// Set up the content pane.
Image img = HighGui.toBufferedImage(matImgSrc);
addComponentsToPane(frame.getContentPane(), img);
// Use the content pane's default BorderLayout. No need for
// setLayout(new BorderLayout());
// Display the window.
frame.pack();
frame.setVisible(true);
}
private void addComponentsToPane(Container pane, Image img) {
if (!(pane.getLayout() instanceof BorderLayout)) {
pane.add(new JLabel("Container doesn't use BorderLayout!"));
return;
}
controlPanel = new JPanel();
controlPanel.setLayout(new BoxLayout(controlPanel, BoxLayout.PAGE_AXIS));
methodCheckBox = new JCheckBox("Do gamma correction");
methodCheckBox.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
JCheckBox cb = (JCheckBox) e.getSource();
if (cb.isSelected()) {
controlPanel.remove(alphaBetaPanel);
controlPanel.add(gammaPanel);
performGammaCorrection();
frame.revalidate();
frame.repaint();
frame.pack();
} else {
controlPanel.remove(gammaPanel);
controlPanel.add(alphaBetaPanel);
performLinearTransformation();
frame.revalidate();
frame.repaint();
frame.pack();
}
}
});
controlPanel.add(methodCheckBox);
alphaBetaPanel = new JPanel();
alphaBetaPanel.setLayout(new BoxLayout(alphaBetaPanel, BoxLayout.PAGE_AXIS));
alphaBetaPanel.add(new JLabel(ALPHA_NAME));
sliderAlpha = new JSlider(0, MAX_VALUE_ALPHA, 100);
sliderAlpha.setMajorTickSpacing(50);
sliderAlpha.setMinorTickSpacing(10);
sliderAlpha.setPaintTicks(true);
sliderAlpha.setPaintLabels(true);
sliderAlpha.addChangeListener(new ChangeListener() {
@Override
public void stateChanged(ChangeEvent e) {
alphaValue = sliderAlpha.getValue() / 100.0;
performLinearTransformation();
}
});
alphaBetaPanel.add(sliderAlpha);
alphaBetaPanel.add(new JLabel(BETA_NAME));
sliderBeta = new JSlider(0, MAX_VALUE_BETA_GAMMA, 100);
sliderBeta.setMajorTickSpacing(20);
sliderBeta.setMinorTickSpacing(5);
sliderBeta.setPaintTicks(true);
sliderBeta.setPaintLabels(true);
sliderBeta.addChangeListener(new ChangeListener() {
@Override
public void stateChanged(ChangeEvent e) {
betaValue = sliderBeta.getValue() - 100;
performLinearTransformation();
}
});
alphaBetaPanel.add(sliderBeta);
controlPanel.add(alphaBetaPanel);
gammaPanel = new JPanel();
gammaPanel.setLayout(new BoxLayout(gammaPanel, BoxLayout.PAGE_AXIS));
gammaPanel.add(new JLabel(GAMMA_NAME));
sliderGamma = new JSlider(0, MAX_VALUE_BETA_GAMMA, 100);
sliderGamma.setMajorTickSpacing(20);
sliderGamma.setMinorTickSpacing(5);
sliderGamma.setPaintTicks(true);
sliderGamma.setPaintLabels(true);
sliderGamma.addChangeListener(new ChangeListener() {
@Override
public void stateChanged(ChangeEvent e) {
gammaValue = sliderGamma.getValue() / 100.0;
performGammaCorrection();
}
});
gammaPanel.add(sliderGamma);
pane.add(controlPanel, BorderLayout.PAGE_START);
JPanel framePanel = new JPanel();
imgSrcLabel = new JLabel(new ImageIcon(img));
framePanel.add(imgSrcLabel);
imgModifLabel = new JLabel(new ImageIcon(img));
framePanel.add(imgModifLabel);
pane.add(framePanel, BorderLayout.CENTER);
}
private void performLinearTransformation() {
Mat img = new Mat();
matImgSrc.convertTo(img, -1, alphaValue, betaValue);
imgModifLabel.setIcon(new ImageIcon(HighGui.toBufferedImage(img)));
frame.repaint();
}
private byte saturate(double val) {
int iVal = (int) Math.round(val);
iVal = iVal > 255 ? 255 : (iVal < 0 ? 0 : iVal);
return (byte) iVal;
}
private void performGammaCorrection() {
//! [changing-contrast-brightness-gamma-correction]
Mat lookUpTable = new Mat(1, 256, CvType.CV_8U);
byte[] lookUpTableData = new byte[(int) (lookUpTable.total()*lookUpTable.channels())];
for (int i = 0; i < lookUpTable.cols(); i++) {
lookUpTableData[i] = saturate(Math.pow(i / 255.0, gammaValue) * 255.0);
}
lookUpTable.put(0, 0, lookUpTableData);
Mat img = new Mat();
Core.LUT(matImgSrc, lookUpTable, img);
//! [changing-contrast-brightness-gamma-correction]
imgModifLabel.setIcon(new ImageIcon(HighGui.toBufferedImage(img)));
frame.repaint();
}
}
public class ChangingContrastBrightnessImageDemo {
public static void main(String[] args) {
// Load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
// Schedule a job for the event dispatch thread:
// creating and showing this application's GUI.
javax.swing.SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
new ChangingContrastBrightnessImage(args);
}
});
}
}
import java.util.Arrays;
import org.opencv.core.Core;
import org.opencv.core.Core.MinMaxLocResult;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
public class MatOperations {
@SuppressWarnings("unused")
public static void main(String[] args) {
/* Snippet code for Operations with images tutorial (not intended to be run) */
// Load the native OpenCV library
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
String filename = "";
// Input/Output
{
//! [Load an image from a file]
Mat img = Imgcodecs.imread(filename);
//! [Load an image from a file]
}
{
//! [Load an image from a file in grayscale]
Mat img = Imgcodecs.imread(filename, Imgcodecs.IMREAD_GRAYSCALE);
//! [Load an image from a file in grayscale]
}
{
Mat img = new Mat(4, 4, CvType.CV_8U);
//! [Save image]
Imgcodecs.imwrite(filename, img);
//! [Save image]
}
// Accessing pixel intensity values
{
Mat img = new Mat(4, 4, CvType.CV_8U);
int y = 0, x = 0;
{
//! [Pixel access 1]
byte[] imgData = new byte[(int) (img.total() * img.channels())];
img.get(0, 0, imgData);
byte intensity = imgData[y * img.cols() + x];
//! [Pixel access 1]
}
{
//! [Pixel access 5]
byte[] imgData = new byte[(int) (img.total() * img.channels())];
imgData[y * img.cols() + x] = (byte) 128;
img.put(0, 0, imgData);
//! [Pixel access 5]
}
}
// Memory management and reference counting
{
//! [Reference counting 2]
Mat img = Imgcodecs.imread("image.jpg");
Mat img1 = img.clone();
//! [Reference counting 2]
}
{
//! [Reference counting 3]
Mat img = Imgcodecs.imread("image.jpg");
Mat sobelx = new Mat();
Imgproc.Sobel(img, sobelx, CvType.CV_32F, 1, 0);
//! [Reference counting 3]
}
// Primitive operations
{
Mat img = new Mat(400, 400, CvType.CV_8UC3);
{
//! [Set image to black]
byte[] imgData = new byte[(int) (img.total() * img.channels())];
Arrays.fill(imgData, (byte) 0);
img.put(0, 0, imgData);
//! [Set image to black]
}
{
//! [Select ROI]
Rect r = new Rect(10, 10, 100, 100);
Mat smallImg = img.submat(r);
//! [Select ROI]
}
}
{
//! [BGR to Gray]
Mat img = Imgcodecs.imread("image.jpg"); // loading a 8UC3 image
Mat grey = new Mat();
Imgproc.cvtColor(img, grey, Imgproc.COLOR_BGR2GRAY);
//! [BGR to Gray]
}
{
Mat dst = new Mat(), src = new Mat();
//! [Convert to CV_32F]
src.convertTo(dst, CvType.CV_32F);
//! [Convert to CV_32F]
}
// Visualizing images
{
//! [imshow 1]
Mat img = Imgcodecs.imread("image.jpg");
HighGui.namedWindow("image", HighGui.WINDOW_AUTOSIZE);
HighGui.imshow("image", img);
HighGui.waitKey();
//! [imshow 1]
}
{
//! [imshow 2]
Mat img = Imgcodecs.imread("image.jpg");
Mat grey = new Mat();
Imgproc.cvtColor(img, grey, Imgproc.COLOR_BGR2GRAY);
Mat sobelx = new Mat();
Imgproc.Sobel(grey, sobelx, CvType.CV_32F, 1, 0);
MinMaxLocResult res = Core.minMaxLoc(sobelx); // find minimum and maximum intensities
Mat draw = new Mat();
double maxVal = res.maxVal, minVal = res.minVal;
sobelx.convertTo(draw, CvType.CV_8U, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));
HighGui.namedWindow("image", HighGui.WINDOW_AUTOSIZE);
HighGui.imshow("image", draw);
HighGui.waitKey();
//! [imshow 2]
}
System.exit(0);
}
}
from __future__ import division
import cv2 as cv
import numpy as np
# Snippet code for Operations with images tutorial (not intended to be run)
def load():
# Input/Output
filename = 'img.jpg'
## [Load an image from a file]
img = cv.imread(filename)
## [Load an image from a file]
## [Load an image from a file in grayscale]
img = cv.imread(filename, cv.IMREAD_GRAYSCALE)
## [Load an image from a file in grayscale]
## [Save image]
cv.imwrite(filename, img)
## [Save image]
def access_pixel():
# Accessing pixel intensity values
img = np.empty((4,4,3), np.uint8)
y = 0
x = 0
## [Pixel access 1]
intensity = img[y,x]
## [Pixel access 1]
## [Pixel access 3]
blue = img[y,x,0]
green = img[y,x,1]
red = img[y,x,2]
## [Pixel access 3]
## [Pixel access 5]
img[y,x] = 128
## [Pixel access 5]
def reference_counting():
# Memory management and reference counting
## [Reference counting 2]
img = cv.imread('image.jpg')
img1 = np.copy(img)
## [Reference counting 2]
## [Reference counting 3]
img = cv.imread('image.jpg')
sobelx = cv.Sobel(img, cv.CV_32F, 1, 0)
## [Reference counting 3]
def primitive_operations():
img = np.empty((4,4,3), np.uint8)
## [Set image to black]
img[:] = 0
## [Set image to black]
## [Select ROI]
smallImg = img[10:110,10:110]
## [Select ROI]
## [BGR to Gray]
img = cv.imread('image.jpg')
grey = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
## [BGR to Gray]
src = np.ones((4,4), np.uint8)
## [Convert to CV_32F]
dst = src.astype(np.float32)
## [Convert to CV_32F]
def visualize_images():
## [imshow 1]
img = cv.imread('image.jpg')
cv.namedWindow('image', cv.WINDOW_AUTOSIZE)
cv.imshow('image', img)
cv.waitKey()
## [imshow 1]
## [imshow 2]
img = cv.imread('image.jpg')
grey = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
sobelx = cv.Sobel(grey, cv.CV_32F, 1, 0)
# find minimum and maximum intensities
minVal = np.amin(sobelx)
maxVal = np.amax(sobelx)
draw = cv.convertScaleAbs(sobelx, alpha=255.0/(maxVal - minVal), beta=-minVal * 255.0/(maxVal - minVal))
cv.namedWindow('image', cv.WINDOW_AUTOSIZE)
cv.imshow('image', draw)
cv.waitKey()
## [imshow 2]
from __future__ import print_function
from builtins import input
import cv2 as cv
import numpy as np
import argparse
# Read image given by user
## [basic-linear-transform-load]
parser = argparse.ArgumentParser(description='Code for Changing the contrast and brightness of an image! tutorial.')
parser.add_argument('--input', help='Path to input image.', default='../data/lena.jpg')
args = parser.parse_args()
image = cv.imread(args.input)
if image is None:
print('Could not open or find the image: ', args.input)
exit(0)
## [basic-linear-transform-load]
## [basic-linear-transform-output]
new_image = np.zeros(image.shape, image.dtype)
## [basic-linear-transform-output]
## [basic-linear-transform-parameters]
alpha = 1.0 # Simple contrast control
beta = 0 # Simple brightness control
# Initialize values
print(' Basic Linear Transforms ')
print('-------------------------')
try:
alpha = float(input('* Enter the alpha value [1.0-3.0]: '))
beta = int(input('* Enter the beta value [0-100]: '))
except ValueError:
print('Error, not a number')
## [basic-linear-transform-parameters]
# Do the operation new_image(i,j) = alpha*image(i,j) + beta
# Instead of these 'for' loops we could have used simply:
# new_image = cv.convertScaleAbs(image, alpha=alpha, beta=beta)
# but we wanted to show you how to access the pixels :)
## [basic-linear-transform-operation]
for y in range(image.shape[0]):
for x in range(image.shape[1]):
for c in range(image.shape[2]):
new_image[y,x,c] = np.clip(alpha*image[y,x,c] + beta, 0, 255)
## [basic-linear-transform-operation]
## [basic-linear-transform-display]
# Show stuff
cv.imshow('Original Image', image)
cv.imshow('New Image', new_image)
# Wait until user press some key
cv.waitKey()
## [basic-linear-transform-display]
from __future__ import print_function
from __future__ import division
import cv2 as cv
import numpy as np
import argparse
alpha = 1.0
alpha_max = 500
beta = 0
beta_max = 200
gamma = 1.0
gamma_max = 200
def basicLinearTransform():
res = cv.convertScaleAbs(img_original, alpha=alpha, beta=beta)
img_corrected = cv.hconcat([img_original, res])
cv.imshow("Brightness and contrast adjustments", img_corrected)
def gammaCorrection():
## [changing-contrast-brightness-gamma-correction]
lookUpTable = np.empty((1,256), np.uint8)
for i in range(256):
lookUpTable[0,i] = np.clip(pow(i / 255.0, gamma) * 255.0, 0, 255)
res = cv.LUT(img_original, lookUpTable)
## [changing-contrast-brightness-gamma-correction]
img_gamma_corrected = cv.hconcat([img_original, res])
cv.imshow("Gamma correction", img_gamma_corrected)
def on_linear_transform_alpha_trackbar(val):
global alpha
alpha = val / 100
basicLinearTransform()
def on_linear_transform_beta_trackbar(val):
global beta
beta = val - 100
basicLinearTransform()
def on_gamma_correction_trackbar(val):
global gamma
gamma = val / 100
gammaCorrection()
parser = argparse.ArgumentParser(description='Code for Changing the contrast and brightness of an image! tutorial.')
parser.add_argument('--input', help='Path to input image.', default='../data/lena.jpg')
args = parser.parse_args()
img_original = cv.imread(args.input)
if img_original is None:
print('Could not open or find the image: ', args.input)
exit(0)
img_corrected = np.empty((img_original.shape[0], img_original.shape[1]*2, img_original.shape[2]), img_original.dtype)
img_gamma_corrected = np.empty((img_original.shape[0], img_original.shape[1]*2, img_original.shape[2]), img_original.dtype)
img_corrected = cv.hconcat([img_original, img_original])
img_gamma_corrected = cv.hconcat([img_original, img_original])
cv.namedWindow('Brightness and contrast adjustments')
cv.namedWindow('Gamma correction')
alpha_init = int(alpha *100)
cv.createTrackbar('Alpha gain (contrast)', 'Brightness and contrast adjustments', alpha_init, alpha_max, on_linear_transform_alpha_trackbar)
beta_init = beta + 100
cv.createTrackbar('Beta bias (brightness)', 'Brightness and contrast adjustments', beta_init, beta_max, on_linear_transform_beta_trackbar)
gamma_init = int(gamma * 100)
cv.createTrackbar('Gamma correction', 'Gamma correction', gamma_init, gamma_max, on_gamma_correction_trackbar)
on_linear_transform_alpha_trackbar(alpha_init)
on_gamma_correction_trackbar(gamma_init)
cv.waitKey()