Warehouse teams in the IT Asset Disposition (ITAD) industry spend hours every day manually photographing equipment, looking up model numbers, and typing out online listings. For MSP Disposal, a leading ITAD company, this bottleneck was costing real money and slowing down their entire operation.
We built them a native iOS app that pairs with Meta Ray-Ban smart glasses to scan warehouse inventory hands-free, run AI-powered product detection on every captured image, and generate ready-to-publish online listings in seconds. What used to take 3–5 minutes per item now takes under 10 seconds.
This is the story of how we built it, the technical decisions behind it, and the measurable impact it had on MSP Disposal's productivity.
Table of Contents
- 1. The Problem: Manual Listing Is a Productivity Killer
- 2. The Solution: Meta Glasses + AI Product Detection
- 3. Technical Architecture & Stack
- 4. Meta Glasses Integration: How It Works
- 5. AI-Powered Product Detection Pipeline
- 6. The App: Key Screens & User Flow
- 7. Setup & Configuration Guide
- 8. Results: How It Benefited MSP Disposal
- 9. Lessons Learned & What's Next
- 10. Build Something Similar With Lushbinary
1. The Problem: Manual Listing Is a Productivity Killer
MSP Disposal processes thousands of IT assets every month — servers, networking gear, storage arrays, laptops, and peripherals. Each item needs to be photographed, identified, cataloged with specs, priced against market data, and listed for sale. Before our solution, the workflow looked like this:
- Walk to the item in the warehouse
- Pull out a phone or camera, take multiple photos
- Manually read the model number and serial number off the label
- Go back to a workstation and look up specs online
- Research eBay and internal pricing for comparable items
- Type out the listing with description, specs, and pricing
- Upload photos and publish
Average time per item: 3–5 minutes. At 200+ items per day, that's over 10 hours of manual data entry — just for listing.
The real cost wasn't just time. Manual entry introduced typos, inconsistent descriptions, and missed pricing opportunities. Items sat unlisted for days, losing value in a market where prices depreciate fast.
2. The Solution: Meta Glasses + AI Product Detection
We proposed a hands-free scanning solution: warehouse staff wear Meta Ray-Ban smart glasses, walk through the warehouse, and capture photos with a voice command or tap. Each photo is instantly sent to an AI backend that identifies the product, extracts specs, pulls market pricing, and generates a listing — all before the worker reaches the next item.
The core idea: eliminate the gap between seeing an item and listing it. No workstation needed. No manual lookups. No typing.
Hands-Free Capture
Meta Ray-Ban glasses stream photos directly to the iOS app via Bluetooth
AI Detection
Backend AI identifies model, manufacturer, specs, and category from a single photo
Instant Pricing
eBay and MSP average sale prices pulled automatically for each detected product
3. Technical Architecture & Stack
The app is built as a native iOS application in SwiftUI, designed to run on iPhones paired with Meta Ray-Ban Gen 2 glasses. Here's the stack:
| Layer | Technology | Purpose |
|---|---|---|
| iOS App | SwiftUI + Combine | Native UI, reactive state management |
| Glasses SDK | Meta MWDAT (Wearables Device Access Toolkit) | Bluetooth pairing, camera streaming, photo capture |
| Networking | URLSession + async/await | Multipart image upload, JSON API calls |
| AI Backend | Custom API (Node.js) | Product detection, spec extraction, pricing lookup |
| Auth | JWT token-based | Email/password login, OTP password reset |
| Image Processing | CoreGraphics | JPEG compression, resize for upload optimization |
| State Management | ObservableObject + @Published | Reactive UI updates across views |
The architecture follows a clean service-based pattern. Each concern — glasses connectivity, product detection, authentication, image storage — lives in its own singleton service class. Views observe these services reactively through Combine's @Published properties, keeping the UI layer thin and testable.
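As a minimal sketch of this pattern (the class and property names here are illustrative, not the production code): each service is a shared singleton that exposes state through @Published, and views pick up changes automatically.

```swift
import Foundation
import Combine

// Hypothetical minimal service following the singleton + @Published
// pattern described above.
final class ConnectionService: ObservableObject {
    static let shared = ConnectionService()
    private init() {}

    // Views observing this service re-render whenever it changes.
    @Published private(set) var isConnected = false

    func setConnected(_ value: Bool) {
        isConnected = value
    }
}

// A SwiftUI view would observe it like:
// struct StatusView: View {
//     @ObservedObject var service = ConnectionService.shared
//     var body: some View {
//         Text(service.isConnected ? "Connected" : "Offline")
//     }
// }
```

Because each service owns exactly one concern, views stay free of Bluetooth, networking, and persistence logic.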
4. Meta Glasses Integration: How It Works
Integrating with Meta Ray-Ban glasses uses Meta's Wearables Device Access Toolkit (MWDAT), which provides Bluetooth device discovery, camera permission management, and a streaming session API. Here's the connection flow we implemented:
Connection Flow
- Registration: The app checks if the device is already registered with the Meta AI companion app. If not, it triggers the registration flow which opens Meta AI for Bluetooth pairing.
- Camera Permission: Once registered, the app requests camera access from the glasses. We implemented a retry mechanism (up to 5 attempts with 2-second delays) because Bluetooth device discovery can take a moment after registration.
- Device Discovery: The app waits for the glasses to appear in the device stream, confirming they're discoverable and ready.
- Stream Session: A StreamSession is created with a raw video codec at low resolution and 15fps, optimized for product scanning rather than video recording.
- Photo Capture: When the user taps “Capture Now” or enables auto-capture, the app listens to the session's photoDataPublisher and converts the JPEG data into a UIImage.
Developer tip: The Meta AI companion app must be installed on the same device. The SDK communicates with it via XPC, and we added specific error handling for cases where the XPC connection is invalid — a common issue during initial setup.
We also built a mock mode for development and testing. Using Meta's MockDeviceKit, we simulate a paired device that returns a test image on capture. This let our team develop and test the full capture-to-detection pipeline without physical glasses — critical for fast iteration.
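One way to structure such a mock mode is to put the capture call behind a protocol, so the rest of the pipeline never knows whether a real device is attached. This is a sketch of the idea only; MockCaptureSource stands in for Meta's MockDeviceKit, whose actual API we are paraphrasing here.

```swift
import Foundation

// Abstraction over "something that can take a photo" — real glasses
// in production, a canned mock during development.
protocol CaptureSource {
    func capturePhoto() async throws -> Data
}

// Development stand-in: returns a fixed test image on every capture,
// similar to what MockDeviceKit provides.
struct MockCaptureSource: CaptureSource {
    let testImageData: Data

    func capturePhoto() async throws -> Data {
        testImageData
    }
}
```

With this in place, the capture-to-detection pipeline can be exercised end to end in the simulator by injecting the mock source instead of the glasses-backed one.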
MetaGlassesService.swift — Connection & Photo Capture
class MetaGlassesService: ObservableObject {
static let shared = MetaGlassesService()
@Published var isConnected = false
@Published var isSessionActive = false
@Published var errorMessage: String?
private(set) var streamSession: StreamSession?
// MARK: - Connection with retry for Bluetooth discovery
func connect() async throws {
let wearables = Wearables.shared
// Check registration state — skip if already registered
var alreadyRegistered = false
for await state in wearables.registrationStateStream() {
alreadyRegistered = (state == .registered)
break
}
if !alreadyRegistered {
try await wearables.startRegistration()
for await state in wearables.registrationStateStream() {
if state == .registered { break }
}
}
// Retry camera permission (Bluetooth discovery needs time)
let maxRetries = 5
for attempt in 1...maxRetries {
do {
let status = try await wearables.requestPermission(.camera)
if status == .granted { break }
throw MetaGlassesError.permissionDenied
} catch {
if attempt < maxRetries {
try? await Task.sleep(nanoseconds: 2_000_000_000)
} else { throw error }
}
}
// Wait for glasses to appear
for await devices in wearables.devicesStream() {
if !devices.isEmpty { break }
}
await MainActor.run { self.isConnected = true }
}
// MARK: - Stream Session
func startSession() async throws {
let config = StreamSessionConfig(
videoCodec: .raw, resolution: .low, frameRate: 15
)
let session = StreamSession(
streamSessionConfig: config,
deviceSelector: AutoDeviceSelector(wearables: Wearables.shared)
)
self.streamSession = session
await session.start()
}
// MARK: - Photo Capture
func capturePhoto() async throws -> UIImage {
guard let session = streamSession else {
throw MetaGlassesError.noActiveSession
}
return try await withCheckedThrowingContinuation { continuation in
var resumed = false
let token = session.photoDataPublisher.listen { photoData in
guard !resumed else { return }
resumed = true
if let image = UIImage(data: photoData.data) {
continuation.resume(returning: image)
} else {
continuation.resume(throwing: MetaGlassesError.connectionFailed)
}
}
session.capturePhoto(format: .jpeg)
}
}
}
5. AI-Powered Product Detection Pipeline
The moment a photo is captured, it enters our detection pipeline. The flow is fully asynchronous — the user can keep scanning while previous images are being processed.
Detection Pipeline
Each captured image is compressed and uploaded to the /ai-product-detection/detect endpoint with the user's auth token and a unique image reference ID. Each gallery item tracks its own upload status — pending, uploading, success, or failed — displayed as color-coded badges in the gallery grid. Failed uploads show the specific error reason so the user can retry or skip.
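The per-item state can be modeled as a small enum. The production GalleryItem tracks the same four states used below; the color names here are placeholders for the app's actual badge styling.

```swift
import Foundation

// Per-item upload state, mirrored by a color-coded badge in the gallery.
enum UploadStatus: Equatable {
    case pending
    case uploading
    case success
    case failed(String)   // carries the error reason shown to the user

    // Placeholder badge colors; the real app maps these to brand styling.
    var badgeColorName: String {
        switch self {
        case .pending:   return "gray"
        case .uploading: return "blue"
        case .success:   return "green"
        case .failed:    return "red"
        }
    }
}
```

Keeping the failure reason inside the enum case means the gallery cell can render the badge and the retry hint from a single value.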
ProductDetectionManager.swift — Capture → Upload → Detect
class ProductDetectionManager: ObservableObject {
static let shared = ProductDetectionManager()
@Published var galleryItems: [GalleryItem] = []
/// Add a captured image and immediately kick off upload
func captureAndUpload(image: UIImage, token: String?) {
let item = GalleryItem(image: image)
DispatchQueue.main.async {
self.galleryItems.insert(item, at: 0)
}
Task { await upload(item: item, token: token) }
}
private func upload(item: GalleryItem, token: String?) async {
await MainActor.run { item.uploadStatus = .uploading }
do {
let response = try await ProductDetectionService.shared
.detectProduct(image: item.image, token: token)
if response.success {
let result = ProductDetectionResult(from: response, image: item.image)
await MainActor.run {
item.detectionResult = result
item.uploadStatus = .success
}
} else {
let reason = response.reason ?? response.error ?? "Detection failed"
await MainActor.run { item.uploadStatus = .failed(reason) }
}
} catch {
await MainActor.run { item.uploadStatus = .failed(error.localizedDescription) }
}
}
}
ProductDetectionService.swift — Image Upload & API Call
func detectProduct(image: UIImage, token: String?) async throws -> ProductDetectionResponse {
// Compress aggressively for fast upload
guard let imageData = image.jpegData(compressionQuality: 0.5) else {
throw ProductDetectionError.compressionFailed
}
// Resize if still too large
var finalData = imageData
if imageData.count > 2_000_000 {
if let resized = resizeImage(image, maxDimension: 1024),
let data = resized.jpegData(compressionQuality: 0.5) {
finalData = data
}
}
let boundary = UUID().uuidString
var request = URLRequest(url: URL(string: "\(APIConfig.baseURL)/ai-product-detection/detect")!)
request.httpMethod = "POST"
request.timeoutInterval = 60
request.setValue(token, forHTTPHeaderField: "Authorization")
request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
// Build multipart body with the captured image
var body = Data()
body.append("--\(boundary)\r\n".data(using: .utf8)!)
body.append("Content-Disposition: form-data; name=\"image\"; filename=\"capture.jpg\"\r\n".data(using: .utf8)!)
body.append("Content-Type: image/jpeg\r\n\r\n".data(using: .utf8)!)
body.append(finalData)
body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
request.httpBody = body
let (data, _) = try await URLSession.shared.data(for: request)
return try JSONDecoder().decode(ProductDetectionResponse.self, from: data)
}
ProductDetectionModels.swift — AI Response Model
struct ProductDetectionResponse: Codable {
let success: Bool
let modelName: String?
let manufacturer: String?
let description: String?
let specs: SpecsValue?
let qty: Int?
let category: String?
let potentialBuyerCount: Int?
let potentialBuyers: [PotentialBuyer]?
let imageUrl: String?
let eBayAvgSalePrice: String?
let mspAvgSalePrice: String?
let reason: String?
let error: String?
}
struct ProductDetectionResult: Identifiable {
let id = UUID()
let modelName: String
let manufacturer: String
let description: String
let specs: String
let qty: Int
let category: String
let potentialBuyerCount: Int
let eBayAvgSalePrice: String
let mspAvgSalePrice: String
let capturedImage: UIImage?
}
6. The App: Key Screens & User Flow
The app is intentionally simple. Warehouse workers aren't developers — the UI needs to be obvious and fast. Here's the flow:
Splash Screen
Animated typewriter effect showing the MSP Disposal brand. Checks login status and routes accordingly.
Login / Sign Up
Email and password authentication with OTP-based password reset. Keyboard-aware scrolling for smooth mobile input.
Home (Scanning Hub)
The main interface. Shows connection status, Connect/Disconnect buttons, Capture Now for single shots, and Auto Capture mode that takes a photo every 5 seconds.
Gallery
A 2-column grid of all captured images with real-time upload status badges (pending, uploading, success, failed). Tapping an item shows the full AI detection result.
The tab bar provides two-tap navigation between Home and Gallery. A logout button in the toolbar lets users switch accounts. The entire app supports both light and dark mode with MSP Disposal's brand colors (deep purple #140930 and gold #CCB168).
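The Auto Capture mode described above can be sketched as a simple repeating timer; the class name and callback below are illustrative, and in the real app the callback invokes the glasses session's capture API.

```swift
import Foundation

// Minimal sketch of a 5-second auto-capture loop.
final class AutoCaptureController {
    private var timer: Timer?
    private(set) var isRunning = false

    /// Starts firing `onCapture` every `interval` seconds.
    func start(interval: TimeInterval = 5.0, onCapture: @escaping () -> Void) {
        stop()  // never run two timers at once
        isRunning = true
        timer = Timer.scheduledTimer(withTimeInterval: interval,
                                     repeats: true) { _ in
            onCapture()
        }
    }

    /// Invalidates the timer and resets state.
    func stop() {
        timer?.invalidate()
        timer = nil
        isRunning = false
    }
}
```

Because each capture kicks off its own async upload, the timer never waits on network I/O between shots.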
7. Setup & Configuration Guide
For teams looking to deploy a similar solution, here's what the setup involves:
Prerequisites
- Meta Ray-Ban Gen 2 smart glasses
- iPhone running iOS 17+ with the Meta AI companion app installed
- Xcode 15+ for building and deploying the app
- Meta Wearables Device Access Toolkit (MWDAT) SDK — includes MWDATCore, MWDATCamera, and MWDATMockDevice
- Backend API endpoint for AI product detection
Configuration Steps
- Clone the repository and open the .xcodeproj in Xcode.
- Configure the API base URL in Config.swift. The app uses environment-based configuration (dev, staging, production) managed through Xcode build schemes.
- Add the MWDAT SDK frameworks to your project. These are provided by Meta through their developer portal and include the core Bluetooth communication layer, camera APIs, and mock device support.
- Register a URL scheme in Info.plist for the Meta AI app callback. The app handles this in its onOpenURL modifier, passing the URL to Wearables.shared.handleUrl().
- Install custom fonts (IvyStyleTW and Paramount) in the project bundle. The app registers them at launch via CTFontManagerRegisterFontsForURL.
- Build and run on a physical device. The Meta glasses SDK does not work in the iOS Simulator — use mock mode for simulator testing.
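Launch-time font registration might look roughly like this. The .ttf extension and resource file names are assumptions; the function returns how many fonts it attempted to register so the result can be checked.

```swift
import Foundation
import CoreText

// Hedged sketch of registering bundled custom fonts at launch via
// CTFontManagerRegisterFontsForURL. Resource names/extensions are assumed.
@discardableResult
func registerCustomFonts() -> Int {
    let fontFiles = ["IvyStyleTW", "Paramount"]  // assumed resource names
    var registered = 0
    for name in fontFiles {
        // Skip silently if the font isn't bundled (e.g. in tests).
        guard let url = Bundle.main.url(forResource: name,
                                        withExtension: "ttf") else {
            continue
        }
        var error: Unmanaged<CFError>?
        if CTFontManagerRegisterFontsForURL(url as CFURL, .process, &error) {
            registered += 1
        }
    }
    return registered
}
```

The app calls this once from the App initializer (see AppTheme.registerFonts() in the entry point below the configuration steps).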
CI/CD: We set up Fastlane for automated builds and distribution, making it easy to push updates to the MSP Disposal team without manual Xcode builds.
App Entry Point — URL Scheme Handler for Meta AI Callback
import SwiftUI
import MWDATCore
@main
struct MSPDisposalApp: App {
init() {
AppTheme.registerFonts()
}
var body: some Scene {
WindowGroup {
SplashView()
// Handle URL callbacks from Meta AI app for registration
.onOpenURL { url in
Task {
do {
try await Wearables.shared.handleUrl(url)
} catch {
print("Failed to handle URL: \(error)")
}
}
}
}
}
}
Config.swift — API Endpoints
struct APIConfig {
static let baseURL = "https://api.mspdisposal.com"
struct Endpoints {
static let signin = "/auth/signin"
static let signup = "/auth/signup"
static let detectProduct = "/ai-product-detection/detect"
static let forgotPasswordMobile = "/admins/forgot-password-otp"
static let verifyOTP = "/admins/verify-otp"
static let resetPasswordMobile = "/admins/reset-password-otp"
}
static let timeoutInterval: TimeInterval = 30
}
8. Results: How It Benefited MSP Disposal
The impact was immediate and measurable. Here's what changed after deploying the Meta Glasses app:
- From 3–5 minutes per item to under 10 seconds
- Workers process 600+ items per day vs. 200 previously
- AI handles model, specs, description, and pricing
- Items listed the same day they arrive in the warehouse
Beyond the numbers, the qualitative feedback was strong. Warehouse staff reported that the glasses felt natural to use — no stopping to pull out a phone, no juggling devices while handling equipment. The auto-capture mode was particularly popular: workers just walk through aisles and the app captures every 5 seconds, building a complete inventory gallery automatically.
Buyer matching was an unexpected win. The AI detection response includes potential buyers from MSP's database who have purchased similar equipment before. This turned the app from a listing tool into a sales tool — staff could proactively reach out to buyers before items even hit the marketplace.
9. Lessons Learned & What's Next
Building a production app on top of Meta's wearable SDK taught us a few things worth sharing:
- Bluetooth is unpredictable. Device discovery timing varies. Our retry mechanism for camera permissions (5 attempts, 2s delay) was essential. Without it, first-time connections failed about 40% of the time.
- Mock mode is non-negotiable. You can't develop a glasses app efficiently if every test requires physical hardware. Meta's MockDeviceKit made it possible to iterate on the full pipeline in the simulator.
- Image size matters more than quality. Warehouse photos don't need to be high-res for AI detection. Compressing to 50% JPEG and capping at 1024px cut upload times by 70% with no loss in detection accuracy.
- Async everything. The capture → upload → detect pipeline runs fully asynchronously. Users never wait for one image to finish before capturing the next. This was critical for the auto-capture mode to feel seamless.
- URL scheme handling is easy to miss. The Meta AI app communicates back to your app via a URL scheme after registration. If you forget the onOpenURL handler, pairing silently fails with no error.
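The resize cap from the image-size lesson (the same logic the resizeImage helper applies before upload) boils down to aspect-fit math, sketched here as a pure function:

```swift
import Foundation
import CoreGraphics

// Scales a size down so its longer edge is at most `maxDimension`,
// preserving aspect ratio. Sizes already within the cap pass through.
func targetSize(for size: CGSize, maxDimension: CGFloat) -> CGSize {
    let longest = max(size.width, size.height)
    guard longest > maxDimension else { return size }
    let scale = maxDimension / longest
    return CGSize(width: (size.width * scale).rounded(),
                  height: (size.height * scale).rounded())
}

// In the app, this result drives the actual redraw of the UIImage
// (e.g. via UIGraphicsImageRenderer(size:)).
```

For a typical 4032×3024 capture, this yields 1024×768 before the 50% JPEG compression is applied.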
Looking ahead, we're exploring:
- Real-time video stream analysis (detecting multiple items in a single pass)
- Voice commands for hands-free control (“capture”, “next aisle”, “show pricing”)
- Barcode and serial number OCR directly from the glasses camera feed
- Android companion app for broader device support
10. Build Something Similar With Lushbinary
The MSP Disposal project is a great example of what happens when you combine wearable hardware, AI, and a clear business problem. The technology isn't the hard part — understanding the workflow and building something that warehouse staff actually want to use is.
At Lushbinary, we specialize in building custom software that solves real operational problems. Whether you're looking to integrate wearable devices, build AI-powered workflows, or modernize a manual process, we'd love to talk.
If you're in the ITAD space, logistics, warehousing, or any industry where hands-free scanning and AI detection could save your team hours every day — reach out. We'll show you what's possible.
Ready to Build Your Wearable + AI Solution?
Tell us about your workflow and we'll design a solution that fits. No obligation, just a conversation about what's possible.
Build Smarter, Launch Faster.
Book a free strategy call and explore how Lushbinary can turn your vision into reality.
