This post shows how to implement barcode scanning using Google ML Kit’s Barcode Scanning API and Jetpack CameraX, with the help of a simple example application.
CameraX
CameraX is a Jetpack support library built to make camera app development easier.
It is based on use cases that are lifecycle-aware. These use cases work across all devices running Android 5.0 (API level 21) or higher, ensuring that the same code works on most devices.
CameraX introduces the following use cases:
- Preview: get an image on the display.
- Image analysis: access a buffer seamlessly for use in your algorithms, such as to pass into ML Kit (we will use it to detect barcode).
- Image capture: save high-quality images.
Google ML Kit
ML Kit is a cross-platform mobile SDK (Android and iOS) developed by Google that allows developers to easily access on-device mobile machine learning models.
All the ML Kit’s APIs run on-device, allowing real-time and offline capabilities.
ML Kit’s Barcode Scanning API
ML Kit’s Barcode Scanning API allows you to recognize and decode barcodes.
Key Features
- It reads most standard formats, including Codabar, Code 39, Code 93, EAN-8, EAN-13, QR code, PDF417, and more.
- It automatically scans for all supported barcode formats.
- Barcodes are recognized and scanned regardless of their orientation (right-side-up, upside-down, or sideways).
- Barcode scanning happens on the device, and doesn’t require a network connection.
There are two ways to integrate barcode scanning in your app:
- Bundled model: the scanning model is packaged as part of your application.
- Unbundled model: the model depends on Google Play services and is downloaded on demand.
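As a rough sketch of the two integration options (artifact versions here are illustrative and may differ for your setup), the dependency choice looks like this in build.gradle:

```groovy
dependencies {
    // Option 1 — bundled model: the scanner model ships inside your APK.
    // Larger download, but scanning works immediately and fully offline.
    implementation 'com.google.mlkit:barcode-scanning:17.0.0'

    // Option 2 — unbundled model: the model is downloaded and managed by
    // Google Play services. Smaller APK, but the first scan may have to
    // wait for the model download.
    // implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:18.0.0'
}
```

Pick one of the two; this tutorial uses the bundled model.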
Creating new project
1 . Create a new project by going to File ⇒ New ⇒ New Project, select Empty Activity, provide an app name, set the language to Kotlin, and finally click Finish.
2 . Open the build.gradle (Module: app) file and add the below dependencies inside the dependencies section:
build.gradle
dependencies {
    def camerax_version = "1.0.2"

    // ViewModel and LiveData
    implementation "androidx.lifecycle:lifecycle-livedata:2.4.0"
    implementation "androidx.lifecycle:lifecycle-viewmodel:2.4.0"

    // Use this dependency for the bundled model
    implementation 'com.google.mlkit:barcode-scanning:17.0.0'

    // CameraX library
    implementation "androidx.camera:camera-camera2:${camerax_version}"
    // CameraX Lifecycle library
    implementation "androidx.camera:camera-lifecycle:${camerax_version}"
    // CameraX View class
    implementation "androidx.camera:camera-view:1.0.0-alpha30"
}
3 . Open your AndroidManifest.xml file and add camera permission and camera hardware feature above <application> tag.
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
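As a side note: declaring `<uses-feature android:name="android.hardware.camera"/>` makes Google Play hide the app from devices without a camera. If scanning is an optional feature of your app, the standard manifest option is to mark the feature as not required and check for the camera at runtime instead:

```xml
<!-- Optional alternative: keeps the app installable on camera-less devices;
     check PackageManager.hasSystemFeature() at runtime before scanning. -->
<uses-feature
    android:name="android.hardware.camera"
    android:required="false" />
```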
4. Add PreviewView and a TextView (to show scanned data) to the main activity layout (activity_main.xml).
PreviewView : Custom View that displays the camera feed for CameraX’s Preview use case.
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<androidx.camera.view.PreviewView
android:id="@+id/preview_view"
android:layout_width="match_parent"
android:layout_height="match_parent" />
<TextView
android:id="@+id/tvScannedData"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginBottom="50dp"
android:gravity="center"
android:textColor="@android:color/white"
android:textSize="20sp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.5"
app:layout_constraintStart_toStartOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
5 . Now we need to handle the camera permission. Here’s the code to check whether the camera permission has been granted and to request it if not:
class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        if (isCameraPermissionGranted()) {
            // startCamera
        } else {
            ActivityCompat.requestPermissions(
                this,
                arrayOf(Manifest.permission.CAMERA),
                PERMISSION_CAMERA_REQUEST
            )
        }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<String>,
        grantResults: IntArray
    ) {
        if (requestCode == PERMISSION_CAMERA_REQUEST) {
            if (isCameraPermissionGranted()) {
                // startCamera
            } else {
                Log.e(TAG, "no camera permission")
            }
        }
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    }

    private fun isCameraPermissionGranted(): Boolean {
        return ContextCompat.checkSelfPermission(
            baseContext,
            Manifest.permission.CAMERA
        ) == PackageManager.PERMISSION_GRANTED
    }

    companion object {
        private val TAG = MainActivity::class.java.simpleName
        private const val PERMISSION_CAMERA_REQUEST = 1
    }
}
Getting ProcessCameraProvider from ViewModel
6 . Create a new class CameraXViewModel which extends AndroidViewModel and add the below code.
CameraXViewModel.kt
class CameraXViewModel(application: Application) : AndroidViewModel(application) {

    private var cameraProviderLiveData: MutableLiveData<ProcessCameraProvider>? = null

    val processCameraProvider: LiveData<ProcessCameraProvider>
        get() {
            if (cameraProviderLiveData == null) {
                cameraProviderLiveData = MutableLiveData()
                val cameraProviderFuture = ProcessCameraProvider.getInstance(getApplication())
                cameraProviderFuture.addListener(
                    Runnable {
                        try {
                            cameraProviderLiveData!!.setValue(cameraProviderFuture.get())
                        } catch (e: ExecutionException) {
                            Log.e(TAG, "Unhandled exception", e)
                        }
                    },
                    ContextCompat.getMainExecutor(getApplication())
                )
            }
            return cameraProviderLiveData!!
        }

    companion object {
        private const val TAG = "CameraXViewModel"
    }
}
- processCameraProvider: a LiveData instance wrapping a ProcessCameraProvider. ProcessCameraProvider binds the lifecycle of cameras to a lifecycle owner, so you don’t need to worry about opening and closing the camera yourself; CameraX is lifecycle-aware.
- cameraProviderFuture.addListener(): adds a listener to the cameraProviderFuture. It takes a Runnable as the first argument (which sets cameraProviderLiveData to the value obtained from cameraProviderFuture) and ContextCompat.getMainExecutor() as the second argument (an Executor that runs on the main thread).
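The future-plus-listener pattern used here can be sketched in plain Kotlin, using java.util.concurrent.CompletableFuture as a stand-in for CameraX’s ListenableFuture and a direct executor in place of the main-thread executor (all names below are illustrative, not CameraX API):

```kotlin
import java.util.concurrent.CompletableFuture
import java.util.concurrent.Executor

// Stand-in for ProcessCameraProvider.getInstance() + addListener():
// register a callback on an executor, then complete the future.
fun listenForProvider(): String? {
    val providerFuture = CompletableFuture<String>()
    val directExecutor = Executor { task -> task.run() }  // "main thread" stand-in

    var observedValue: String? = null
    // Equivalent of cameraProviderFuture.addListener(Runnable { ... }, executor):
    // the callback fires on the given executor once the future completes.
    providerFuture.whenCompleteAsync({ value, error ->
        if (error == null) observedValue = value
    }, directExecutor)

    providerFuture.complete("camera-provider")  // the provider becomes available
    return observedValue
}

fun main() {
    println(listenForProvider())  // camera-provider
}
```

In the real app the executor is the main-thread executor, which is why it is safe to call setValue() on the LiveData from inside the listener.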
7 . Now, in MainActivity, observe the LiveData instance of ProcessCameraProvider defined in CameraXViewModel by adding the code below:
private var previewView: PreviewView? = null
private var cameraProvider: ProcessCameraProvider? = null
private var cameraSelector: CameraSelector? = null
private var previewUseCase: Preview? = null
private var analysisUseCase: ImageAnalysis? = null
private val lensFacing = CameraSelector.LENS_FACING_BACK

previewView = findViewById(R.id.preview_view)
cameraSelector = CameraSelector.Builder().requireLensFacing(lensFacing).build()
ViewModelProvider(
    this, ViewModelProvider.AndroidViewModelFactory.getInstance(application)
).get(CameraXViewModel::class.java)
    .processCameraProvider
    .observe(this) { provider: ProcessCameraProvider? ->
        cameraProvider = provider
        if (isCameraPermissionGranted()) {
            bindPreviewUseCase()
            bindAnalyseUseCase()
        } else {
            ActivityCompat.requestPermissions(
                this,
                arrayOf(Manifest.permission.CAMERA),
                PERMISSION_CAMERA_REQUEST
            )
        }
    }
Implement Preview Use Case
8 . Create a new function bindPreviewUseCase() to implement the preview use case. Inside bindPreviewUseCase(), first unbind any previously bound preview use case from the cameraProvider, then bind your cameraSelector and Preview object to the ProcessCameraProvider instance.
private fun bindPreviewUseCase() {
    if (previewUseCase != null) {
        cameraProvider!!.unbind(previewUseCase)
    }

    previewUseCase = Preview.Builder()
        .setTargetRotation(previewView!!.display.rotation)
        .build()

    // Attach the PreviewView surface provider to the preview use case.
    previewUseCase!!.setSurfaceProvider(previewView!!.surfaceProvider)

    try {
        cameraProvider!!.bindToLifecycle(this, cameraSelector!!, previewUseCase)
    } catch (illegalStateException: IllegalStateException) {
        Log.e(TAG, illegalStateException.message ?: "IllegalStateException")
    } catch (illegalArgumentException: IllegalArgumentException) {
        Log.e(TAG, illegalArgumentException.message ?: "IllegalArgumentException")
    }
}
Implement ImageAnalysis Use Case
9 . Create a new function bindAnalyseUseCase() to implement the image analysis use case. Inside bindAnalyseUseCase(), create a BarcodeScanner instance, instantiate an ImageAnalysis instance, and set its analyzer.
private fun bindAnalyseUseCase() {
    // Note that if you know which format of barcode your app is dealing with,
    // detection will be faster.
    val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_ALL_FORMATS)
        .build()
    val barcodeScanner: BarcodeScanner = BarcodeScanning.getClient(options)

    analysisUseCase = ImageAnalysis.Builder()
        .setTargetRotation(previewView!!.display.rotation)
        .build()

    // Initialize our background executor
    val cameraExecutor = Executors.newSingleThreadExecutor()
    analysisUseCase?.setAnalyzer(
        cameraExecutor,
        ImageAnalysis.Analyzer { imageProxy ->
            processImageProxy(barcodeScanner, imageProxy)
        }
    )

    try {
        cameraProvider!!.bindToLifecycle(this, cameraSelector!!, analysisUseCase)
    } catch (illegalStateException: IllegalStateException) {
        Log.e(TAG, illegalStateException.message ?: "IllegalStateException")
    } catch (illegalArgumentException: IllegalArgumentException) {
        Log.e(TAG, illegalArgumentException.message ?: "IllegalArgumentException")
    }
}
10 . Create a new function processImageProxy(). In processImageProxy(), to recognize barcodes in an image, create an InputImage object from the ImageProxy, then pass the InputImage object to the BarcodeScanner’s process() method.
private fun processImageProxy(
    barcodeScanner: BarcodeScanner,
    imageProxy: ImageProxy
) {
    val inputImage = InputImage.fromMediaImage(
        imageProxy.image!!,
        imageProxy.imageInfo.rotationDegrees
    )

    barcodeScanner.process(inputImage)
        .addOnSuccessListener { barcodes ->
            barcodes.forEach { barcode ->
                val bounds = barcode.boundingBox
                val corners = barcode.cornerPoints
                val rawValue = barcode.rawValue
                tvScannedData.text = rawValue

                when (barcode.valueType) {
                    Barcode.TYPE_WIFI -> {
                        val ssid = barcode.wifi!!.ssid
                        val password = barcode.wifi!!.password
                        val type = barcode.wifi!!.encryptionType
                        tvScannedData.text = "ssid: $ssid\npassword: $password\ntype: $type"
                    }
                    Barcode.TYPE_URL -> {
                        val title = barcode.url!!.title
                        val url = barcode.url!!.url
                        tvScannedData.text = "Title: $title\nURL: $url"
                    }
                }
            }
        }
        .addOnFailureListener {
            Log.e(TAG, it.message ?: it.toString())
        }
        .addOnCompleteListener {
            // Once the image has been analyzed, close it by calling
            // ImageProxy.close() so the next frame can be delivered.
            imageProxy.close()
        }
}
11 . If the barcode recognition operation succeeds, a list of Barcode objects will be passed to the success listener. Each Barcode object represents a barcode that was detected in the image. For each barcode, you can get its bounding coordinates in the input image, as well as the raw data encoded by the barcode. Also, if the barcode scanner was able to determine the type of data encoded by the barcode, you can get an object containing parsed data.
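The branching on valueType in the success listener can be distilled into a plain-Kotlin sketch. ScannedBarcode below is a hypothetical stand-in for ML Kit’s Barcode class (not the real API), used only to show the pattern of preferring parsed data and falling back to the raw value:

```kotlin
// Hypothetical stand-ins for ML Kit's parsed barcode types.
sealed class ScannedBarcode(val rawValue: String) {
    class Wifi(rawValue: String, val ssid: String, val password: String) : ScannedBarcode(rawValue)
    class Url(rawValue: String, val title: String, val url: String) : ScannedBarcode(rawValue)
    class Other(rawValue: String) : ScannedBarcode(rawValue)
}

// Mirrors the when(valueType) block in processImageProxy(): show parsed
// fields when the scanner recognized the data type, otherwise fall back
// to the raw encoded value.
fun displayText(barcode: ScannedBarcode): String = when (barcode) {
    is ScannedBarcode.Wifi -> "ssid: ${barcode.ssid}\npassword: ${barcode.password}"
    is ScannedBarcode.Url -> "Title: ${barcode.title}\nURL: ${barcode.url}"
    is ScannedBarcode.Other -> barcode.rawValue
}

fun main() {
    val wifi = ScannedBarcode.Wifi("WIFI:S:Home;P:secret;;", ssid = "Home", password = "secret")
    println(displayText(wifi))
    // ssid: Home
    // password: secret
}
```

Using a sealed class makes the when exhaustive, so the compiler flags any parsed type you forget to handle.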
When you run the app, it shows the live camera preview with the scanned barcode value displayed at the bottom of the screen.