Hi, I know this has come up a couple times before, but it would be really useful to have some way to automate entering metadata into the NLP custom metadata fields, rather than having to type it in by hand.
Previous threads have mentioned using exiftool to apply metadata to the exported file automatically, but I would like to populate the NLP custom metadata fields specifically, so that I can use Lightroom's filtering/searching features on my original scans rather than the exports.
My suggested interface would be simple: select some photos, activate the feature, and choose a CSV or JSON file (or similar) with correctly-named fields; the plugin would then import each corresponding record into the metadata fields of the selected photos.
I once tried to implement this myself as a separate plugin but then found out that Lightroom does not allow other plugins to access NLP’s custom metadata fields whatsoever, so I had to give up on that. Unfortunately, it seems like this would have to be built into NLP itself.
Metadata presets don’t help with entering lens, aperture, and shutter information for individual shots, which is the most tedious part. They’re great for commonly-used film/developer combos or for setting camera/lens/focal length information for fixed-lens cameras, though.
There are some apps like Lightme Logbook that make this actually quite easy. Also, of course, there are cameras like the Nikon F6 that did in fact record exposure data for each shot, which can be easily imported as a CSV or similar file.
I am using it for both learning and technical/testing purposes. For example, to determine things like: which focal lengths actually produce my favorite images, am I zone focusing correctly in various lighting conditions, am I metering correctly, etc. The data might not “improve the image itself” but still be very valuable.
I use an Android app called Exif Notes to record relevant metadata for my valuable shots (not everything), and it exports CSV (as well as a batch file to automatically apply the data using exiftool, which isn't suitable for use with RAW files, of course). So an option to import structured metadata into the NLP extended metadata panel would be helpful. But seeing as Nathan is the sole developer, I'd rather he spent his time on the core functionality.
Of course, if he gets some spare time during a vacation…
So to change any of these via another plugin, you would just do something like:
-- access the LrApplication namespace
local LrApplication = import 'LrApplication'
-- get the active catalog
local catalog = LrApplication.activeCatalog()
-- get the currently selected photo
local photo = catalog:getTargetPhoto()
-- changes to catalog data need to be done with write access
catalog:withPrivateWriteAccessDo(function(context)
  local nlpPlugin = "com.nate.photographic.negative"
  local shutterSpeed = "1/8"
  photo:setPropertyForPlugin(nlpPlugin, 'nlpShutterSpeed', shutterSpeed)
end)
In theory, it would be easy for me to take a JSON file and import the metadata onto a single photo…
The thing I’m not sure about is how we would do this in bulk in a way that ensures the data from the JSON file is properly matched to the correct photos in Lightroom… it would basically have to assume that the order of the JSON file matches the order of your images in Lightroom perfectly.
Maybe that’s OK? But it seems to me that there would be a lot of opportunities for mismatches.
I tried this before but was dismayed to find this forum thread, which claims there is no way. Hoping for the best, I tried your instructions anyway:
For me, this gives the following error (once I also wrapped it in LrTasks.startAsyncTask and so on):
Indeed, the documentation for LrPhoto:setPropertyForPlugin says it must receive the _PLUGIN object itself, not just the plugin ID (the latter is acceptable for getPropertyForPlugin, for example, but not for set…).
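For reference, from code running inside NLP itself (where _PLUGIN is in scope), the working call would presumably look something like this — just a sketch, not tested, reusing the nlpShutterSpeed field name from above:

```lua
-- Sketch: this can only run inside NLP itself, because _PLUGIN is the
-- plugin's own LrPlugin object and cannot be obtained by a third party.
local LrApplication = import 'LrApplication'
local LrTasks = import 'LrTasks'

LrTasks.startAsyncTask(function()
  local catalog = LrApplication.activeCatalog()
  local photo = catalog:getTargetPhoto()
  catalog:withPrivateWriteAccessDo(function(context)
    -- pass the _PLUGIN object itself, not the plugin ID string
    photo:setPropertyForPlugin(_PLUGIN, 'nlpShutterSpeed', '1/8')
  end)
end)
```

Which is exactly why this has to be built into NLP rather than shipped as a separate plugin.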
I’ve uploaded my attempt here: GitHub - bcc32/nlp-metadata-importer, if you want to try to reproduce the error. I am running the latest LrC (12.2).
This seems fine to me, but you could also make it more robust by requiring each JSON object for a photo to specify the file name of that photo, for example. Ultimately, it would still be incumbent on the user to make sure the JSON actually matches the photos correctly.
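To make the matching explicit, the import loop could key each record on file name instead of relying on order. A rough sketch, assuming the JSON has already been parsed into a Lua table `records` where each entry carries a `fileName` field plus the values to apply (field names here are illustrative):

```lua
-- Sketch: match parsed records to the selected photos by file name
-- rather than by position, skipping anything that doesn't match.
local LrApplication = import 'LrApplication'

local catalog = LrApplication.activeCatalog()
local photos = catalog:getTargetPhotos()

-- index the current selection by file name for quick lookup
local byName = {}
for _, photo in ipairs(photos) do
  byName[photo:getFormattedMetadata('fileName')] = photo
end

catalog:withPrivateWriteAccessDo(function(context)
  for _, record in ipairs(records) do
    local photo = byName[record.fileName]
    if photo then
      photo:setPropertyForPlugin(_PLUGIN, 'nlpShutterSpeed', record.shutterSpeed)
    end
    -- records with no matching selected photo are skipped rather than guessed
  end
end)
```

Unmatched records could also be reported back to the user instead of silently skipped.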
The app I’m currently using is called Lightme Logbook. It can export both CSV and JSON files for each roll once it’s unloaded. I’ve linked an example of each; the CSV should be self-explanatory, and the JSON is designed to be used with exiftool if you want to write the metadata directly to the image file (rather than to Lightroom custom fields).
BTW, I wouldn’t mind writing any translation script necessary between the data format used here and whatever you would prefer to use for import (I mostly just find the manual data entry tedious).
I think the main things I would love to be able to sync are aperture, shutter speed, lens make/model, and focal length. I don’t really mind inputting the other fields (camera, film stock, etc.) by hand or by metadata preset, since they can be done for the whole roll at once.
I came looking for exactly this feature so big +1.
A few assorted comments as bcc32 has done an excellent job of explaining what would be useful.
The samples provided from the Logbook app (which I also use) do already include the file name.
I think the exiftool JSON format would make a lot of sense as a data format, as it's what most iOS apps that support this functionality seem to use. Like bcc32, I would also be very happy to mangle the data from one format to another.
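For anyone unfamiliar with it, exiftool's -json option reads and writes an array of objects keyed by SourceFile, with standard tag names for the rest. The paths and values below are made up, but the shape is what exiftool expects:

```json
[
  {
    "SourceFile": "roll-042/frame-01.dng",
    "Make": "Nikon",
    "Model": "F3",
    "LensModel": "Nikkor 50mm f/1.4",
    "FNumber": 1.4,
    "ExposureTime": "1/250",
    "FocalLength": "50 mm"
  }
]
```

Conveniently, the SourceFile key would also solve the photo-matching problem discussed above.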
I tried saving metadata to file after populating some of the NLP fields, but found that they were not included. If these fields could be read/written via an XMP sidecar, I'd have all I need, but it would certainly not be as smooth as a JSON import built into NLP.
I care about importing the GPS coordinates also so that when I export I can see photos clustered by location. Totally doable with exiftool today as we can safely write the standard fields but it’d be neat if NLP did this and saved another step.
I’m interested in this question as well. I’m the creator of Crown + Flint, a mobile app for analog photographers who want to capture metadata like camera and lens model and settings, film stock, geolocation, etc. Crown + Flint exports JSON for consumption by Exiftool, but I’d be very interested in creating a tighter integration with NLP!
By the way, we try to solve the correlation problem by capturing a reference image for each shot, to help verify which set of metadata goes with each frame on the film. Paired with chronological ordering, that seems to be the most reliable method.
My previous comment here was made before NLP v3 was released, and there were still a lot of improvements I was hoping to see in the core conversion functionality. Since then, NLP v3 has come out, and I’ve switched to using Crown+Flint, which has led to me recording more shooting info because it’s even easier than before.
So now I'm hoping that a way can be found to import this info from C+F (and maybe other apps people use) into the metadata schema that NLP provides in Lightroom.