Peter Werner

Everything posted by Peter Werner

  1. Great news, congratulations, also for getting featured in Apple's presentation!!! :) One thing though – I already bought and downloaded it successfully, but when it starts, it says "Affinity Photo for iPad is not compatible with your iPad." (using an iPad Air). The requirements in the App Store clearly list the iPad Air as compatible in the "Compatibility" section (contradicting the text in the description). You're likely to get quite a few angry users (or very sad ones like me) who buy the app and then find out it won't run on their device. To be honest, I'd much prefer a resolution/layer-count limit like Procreate has on older devices over a blanket "your device is not compatible" message.
  2. There is no replacement for manual work in these situations. That being said, you might be able to take a few shortcuts, at least locally in certain areas. Here is what I can think of:

If the lines are very regular, solving this in the frequency domain might work. In Affinity Photo, try the FFT Denoise filter as well as Frequency Separation and see if either gets you anywhere. In this example the lines are not a regular pattern, so your results may vary.

The lines are darker than the surrounding area, and you may be able to take advantage of that. One approach is to duplicate the image, set the top layer to Lighten blend mode and offset it horizontally by a few pixels. Then mask the layer to the areas where it looks good. You can also use the layer's blend curves to limit the effect to dark areas. This should get you 90% of the way there in areas of uniform color that are not covered in patterns, like the white portions of the domes.

In some areas, duplicating the layer, applying a Maximum filter and switching that to Lighten may help. The Maximum filter grows bright areas into darker ones, thus filling in the dark lines. Again, mask and apply blend curves as appropriate. You can start with a black mask and brush the effect in selectively over lines, much as you would use the clone stamp, if need be. If you do resort to the clone stamp, again remember to take advantage of Lighten blend mode.

If you have a second image of the same scene taken from a minimally different viewpoint (e.g. taken handheld and moved ever so slightly to either side in between shots), you can align those images and apply a stacking mode (or blend mode). In your case, since the lines (cables?) are mostly closer than the background, they cover different areas of the background in the two images due to parallax, whereas the background remains largely identical, allowing you to eliminate the lines in post. It's also a good technique to remember when you have to photograph something with a lamp post right in front of you that you can't get out of your shot by moving.

Of course, neither of these will give you instant perfect results, and it's likely that nothing will work for the entire image, but if you can use these techniques in individual portions of the image, the amount of manual cloning you'll have to do will at least be reduced significantly.
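The "duplicate, offset, Lighten" trick can be sketched in a few lines. This is a dependency-free Python illustration on a single grayscale row (values 0–255), not what Affinity does internally:

```python
def lighten_offset(image, dx):
    """Blend an image with a horizontally offset copy of itself using
    Lighten blend mode (per-pixel max). Dark lines narrower than dx get
    filled in with the brighter neighboring background."""
    h = len(image)
    w = len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Sample the offset copy, clamping at the image edge.
            shifted = image[y][min(max(x - dx, 0), w - 1)]
            row.append(max(image[y][x], shifted))
        out.append(row)
    return out

# A bright row with a dark 1px "cable" at x=2:
row = [200, 200, 40, 200, 200]
print(lighten_offset([row], 2)[0])  # the dark pixel is replaced by a brighter neighbor
```

Masking and blend curves then limit where this replacement is allowed to apply, exactly as described above.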
  3. Unfortunately, this feature is currently missing – I'd also like to see it, and to see the sliders vertically compressed so they sit right under the histogram like in Photoshop, instead of the two vertical lines that can be mistaken for histogram peaks. However, you can use a Curves adjustment layer to achieve the same effect:

Input levels can be achieved by moving the right (white) and left (black) points horizontally while leaving them at the same vertical position. I think Photo doesn't have a clipping preview in Curves though, so you lose that.

Output levels can be achieved by moving the black and white points vertically while leaving them at the same horizontal position. The lack of a clipping preview is usually not relevant here, unless you work on HDR floating point documents.

Combinations of input and output levels can be achieved by moving the points both horizontally and vertically. You can approximate the Gamma slider by adding a point in the middle of your curve and moving it vertically. It's mathematically not the exact same thing as gamma (so don't use it if you need to reverse or match an exact numeric gamma curve/value precisely), but it's close enough to achieve the same effects and actually offers more control than a gamma slider.

Note that the Alpha option in Curves seems to be somewhat broken currently.
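For reference, the classic Levels controls boil down to a single transfer function, which is why the curve-point moves described above reproduce them. A hedged Python sketch of that math (on normalized 0..1 values; this is the textbook formula, not Affinity's implementation):

```python
def levels(v, in_black=0.0, in_white=1.0, gamma=1.0, out_black=0.0, out_white=1.0):
    """Levels as one transfer function. In a Curves UI: moving the endpoints
    horizontally = input levels, vertically = output levels; a raised or
    lowered midpoint approximates the gamma slider."""
    t = (v - in_black) / (in_white - in_black)
    t = min(max(t, 0.0), 1.0)   # clip values outside the input range
    t = t ** (1.0 / gamma)      # gamma mainly affects the midtones
    return out_black + t * (out_white - out_black)

print(levels(0.5, gamma=2.0))  # midtones lifted, roughly 0.707
```

The clipping step is where a clipping preview would normally warn you, which is the part Curves in Photo doesn't show.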
  4. Swift doesn't interface with C++ code, which is what the portable part of the software is written in. So my guess is Objective-C++, which is basically Objective-C but allows you to mix in C++ code rather than only C code.
  5. The current Gradient Map feature is quite basic, like the one in Photoshop, but it could be a much more useful tool with the following additions:

HSL Mode: Instead of going based only on luminance, this would use the input hue or saturation as the lookup index. In combination with HSL blend modes, this would allow for some fantastic workflows, like essentially warping the color wheel to taste, similar to the "HSL Wheels" feature in Magic Bullet Colorista (note: don't be fooled by the name of the feature, this is NOT referring to the three-way color corrector). Just use a gradient of the HSL spectrum, drag or re-define stops, set the result to "Hue" blend mode, and you have an extremely powerful color correction tool that gives you results that would be difficult to achieve any other way.

Circular editor option: Like the Colorama filter built into After Effects, this makes it easy to work on maps that are supposed to start and end with the same color. In combination with an input for a number of revolutions (cycles) to use, this would also make it easy to create gradient effects where a few stops repeat multiple times across the spectrum (say, alternating black-white-black-white). This would also massively improve usability in conjunction with the HSL mode option suggested above.

Access to swatches: This would make it easy to re-use gradients by defining them or recalling them from swatches, as an alternative to using Adjustment Presets.

Interpolation control: Sometimes the transition from one color to the next needs fine-tuning – this is something that Affinity's gradient editor already supports, but not the Gradient Map dialog. A Constant interpolation setting, where the color stays constant up until the next stop, would also be useful since it would eliminate the need for duplicate stops in the same position, which are really hard to select. Possibly, the curve editor could also be re-used to define falloff using a Bézier or Catmull-Rom spline.

Duplicate stop option: Often it is necessary to use the same color multiple times in a gradient. Adding a button for this and/or enabling Option+drag to duplicate would be useful. Photoshop aggravatingly always inserts new stops with the same color instead of the color that is already there at that position in the gradient, but the (better) implementation of this in Affinity had the side effect that duplicating stops became harder.

On-image sampling: While the dialog box is open, it would be useful to highlight the value under the mouse pointer in the gradient display, to be able to place a stop exactly at the desired position. Clicking in the image would insert a stop.

On-image highlighting: Conversely, when editing/dragging a stop in the gradient editor, an option to highlight the affected pixels in the image would be helpful. The options could be: off (nothing); all pixels that have exactly the value represented by the position of the stop (similar to focus peaking); or the zone that will be affected in the image. For the last option, the overlay would start at 100% intensity at the value represented by the stop and fall off to 0% on each side towards the position of the next stop, taking the falloff into account (see "Interpolation control" above). Optionally, two different colors could be used to represent each side of the stop.

Resizable dialog box for more precision: When editing 16-bit images, editing falloff from one stop to the next, or placing stops very close to each other, it would be useful to have more room to work with. Making gradient editor dialog boxes in the application resizable would alleviate this problem.

Snap to Luminosity button: Sometimes it is useful to place stops exactly at the point in the gradient that corresponds to their luminosity, especially when they are defined by selecting swatches from a color palette. Adding a button that moves all selected stops to that position would make this really quick. For instance, tinting an image with two tones while keeping black and white intact could be achieved very quickly by selecting a black-to-white preset, adding two stops, selecting a color from the document color palette for each, and clicking that "Snap Selected Stops to Luminosity" button.

Ability to move start and end stops: The values before the first stop and after the last one would simply use constant extrapolation. This would eliminate the need for duplicate stops, which take longer to create and are harder to edit since all operations need to be performed twice.
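To pin down the semantics of the interpolation and end-stop suggestions, here is a hypothetical Python sketch of a gradient map lookup (the function and its behavior are my assumptions, not an existing Affinity feature):

```python
def gradient_map(value, stops, constant=False):
    """stops: sorted list of (position, color), color = (r, g, b).
    Values before the first stop and after the last use constant
    extrapolation, so the end stops could be moved freely."""
    if value <= stops[0][0]:
        return stops[0][1]
    if value >= stops[-1][0]:
        return stops[-1][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if p0 <= value <= p1:
            if constant:
                return c0  # constant interpolation: hold until next stop
            t = (value - p0) / (p1 - p0)
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))

stops = [(0.0, (0, 0, 0)), (1.0, (255, 255, 255))]
print(gradient_map(0.5, stops))        # (127.5, 127.5, 127.5)
print(gradient_map(0.5, stops, True))  # (0, 0, 0) with constant interpolation
```

With constant interpolation, the duplicate-stop workaround (two stops in the same position) becomes unnecessary, which is the point of the suggestion above.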
  6. That quote could mean a lot of things – it doesn't necessarily mean a processing graph in the sense of a node-based compositor. It would be very cool if it was, and if that power was exposed in the UI at some point. If it's not just a normal processing graph, it doesn't necessarily have to be acyclic – for instance, Maya's document model has a part that can include cycles (called the "DG"). And a node-based data structure in an object model doesn't necessarily have to be directed if it just describes certain relationships between objects, such as, say, linking attributes together to be changed in sync. Directed Acyclic Graph (or "DAG" for short) is just the standard term for the type of graph we usually see in compositing applications. By talking about a DAG, I'm also implying that it is not a subtype with further restrictions. In our case, no further restrictions would mean the object model allows for things like splitting branches and merging them again, whereas some more specific subtypes of a DAG, like Photoshop's layer model or Xara's document tree model, might prevent that. But I don't think we need to get this mathematical about it :lol:
  7. Interesting, do you have any sources for the information that Affinity is node-based internally that go into a bit of detail? My impression is that its object model is based on a node structure like Xara, in such a way that each document element is a node of a tree, not necessarily a directed acyclic processing graph like a compositing software would use. I agree that bringing node-based compositing to the still image world would be a great achievement. While I think a good keyer is a valuable tool for still images, I'm not sure if using Affinity for keying image sequences via batch processing is such a good idea. I'd expect that it would be much easier to run the images through a dedicated video compositing software like Nuke, Fusion, After Effects or even Natron, as that would give you options like animating garbage masks and you could check if your settings work across the clip much easier without re-processing everything a bunch of times. If Affinity ever gets a Photoshop-type video timeline (and I suppose some form of video layer is coming for use with digital publishing in Publisher), applying a key to a video that way would be a much more feasible workflow in my view.
  8. I think YUV/YIQ/YCbCr (and to an extent LAB) would essentially cover the Kelvin Temperature and Tint system. I'm not sure how useful RGBW would be for keying, but it would certainly be interesting to test that. Affinity already supports multiple masks being combined non-destructively and applying Curves to masks works as well, that's already a step better than Photoshop. However, the mask system is extremely quirky. When adding Curves, it doesn't auto select the Alpha channel in the channel drop down, mask layers ignore blend modes (so the only mode you have available is essentially Multiply), layer effects like Blur are ignored, changing Opacity of Mask layers has the opposite effect of what you'd expect, and so on. You can in fact even add effect layers like Gaussian Blur to masks, but it doesn't work while the mask is applied to a layer. You have to first drag the mask out so it is a root-level layer, then nest a blur effect inside them, and then drag them back onto the layer you want to apply them to. It works, but it then hides the nested effect in the layers panel. To edit the parameters, you need to drag the mask out again and then re-apply it to the layer. Of course node-based compositing would make a lot of things much easier and solve the long-standing Photoshop issue of keeping lots of duplicated layers around, but I don't expect this fundamental approach to change any time soon. You could use Symbols, but I fear that might make things even more confusing. I would be happy already if the current quirks were worked out. That being said, I do like how flame offers a layer-based node (Action) inside a node-based system (Batch) so you essentially get the best of both worlds.
  9. 3D LUT Creator also has great color matching features that usually work much better than what Photoshop has. In general, I think any matching feature that is being developed, or will be developed in the future, should generate editable adjustment layers rather than "auto mode" results like Photoshop's, which leave you with a bunch of pixels and no way to tweak the outcome or transfer/re-use it with other images.
  10. Except that Resolve has no RGB or YUV keyer, which can come in very handy as well. The Discreet Keyer in flame and combustion even has the option to use HSL, RGB and YUV in conjunction, though those results could easily be emulated by simply multiplying different keys/masks together. Prelight doesn't seem to allow toggling individual channels on and off, from what I can tell from their website – that can be a very handy feature for telling what your key is doing in each channel without manually resetting, say, the hue part and then undoing it (and having to remember which changes you made to the other channels, because those will inevitably also be reverted by the undo).
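Multiplying keys together is simple enough to sketch. This is a per-pixel illustration under my own assumptions, not the Discreet Keyer's actual algorithm:

```python
def combine_mattes(*mattes):
    """Multiply per-channel keys (e.g. one each from HSL, RGB and YUV)
    into a single matte. Each matte value is 0..1; a pixel survives only
    if every individual key passes it."""
    result = 1.0
    for m in mattes:
        result *= m
    return result

print(combine_mattes(1.0, 0.8, 0.5))  # 0.4
```

Multiplication makes the combined key strictly tighter than any single channel's key, which is why it emulates using several color models "in conjunction".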
  11. Even if HSL blend range curves were added, they wouldn't allow you to mix images based on which pixel has higher saturation. They would allow you to mask areas where values exceed a certain saturation or where they fall below it, but not compare. Blend ranges always work only based on the value in one of the two layers. What you are asking would require something like a new "Maximum Saturation" blend mode. You might be able to hack something like that together using Apply Image, but it is destructive and currently doesn't have HSL support, so you'd have to whip up your own one-line formula to determine or approximate saturation based on RGB values. For these types of tasks, I wish there was something like support for SeExpr, GLSL shaders (like Matchbox in flame or the ill-fated AIF/Pixel Bender in Photoshop/After Effects), or even GLUAS that would allow you to accomplish this kind of thing easily yourself.
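The "one-line formula to approximate saturation from RGB" mentioned above could look like this. Both function names are hypothetical; the chroma proxy is my assumption, and neither exists in Affinity:

```python
def saturation(rgb):
    """Cheap saturation proxy: chroma = max - min on 0..1 RGB values.
    Good enough to decide which of two pixels is "more saturated"."""
    return max(rgb) - min(rgb)

def max_saturation_blend(a, b):
    """Hypothetical "Maximum Saturation" blend mode: per pixel, keep
    whichever layer's pixel has the higher saturation."""
    return a if saturation(a) >= saturation(b) else b

print(max_saturation_blend((0.9, 0.1, 0.1), (0.5, 0.5, 0.5)))  # the red pixel wins
```

Note this compares the two layers per pixel, which is exactly what blend ranges cannot do, since they only threshold against one layer's values.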
  12. I just saw this – having done scripting in both languages in different applications, I really, really, really hope this will be reconsidered when the time comes for that feature to be implemented. I'm actually a bit surprised to read that statement, since I literally can't think of any use case that JavaScript covers that Python doesn't, usually even cleaner and easier. Even basic things like splitting code into multiple source files and deriving from classes provided by the host application (say, MenuCommand, UiPanel, or FillerTextGenerator) are maddening and frustrating experiences in JavaScript, if they are possible at all. Imagine use cases like a basic InDesign IDML importer for Affinity Publisher. With an existing Python XML parser package, this is a fairly straightforward task. Or a "Place Article from RSS feed" command. Network access and an XML parser suitable for RSS feeds come standard with Python without installing anything, and would make that type of thing a matter of only a few lines of code. With Adobe-type JavaScript scripting, I'd probably give up on the idea of writing that script before even starting. Not to mention you guys would have a much easier time developing and maintaining the C++ host side using boost::python versus some raw JavaScript engine API. Also, JavaScript will likely run scripts in separate engine instances for each script to avoid clashing function names and the like, making it really hard to communicate between different scripts, share code between commands and so on. Even running specific functions defined in a script file from the interactive script console may be impossible. I realize that some folks favor JavaScript because they are already familiar with it, but I don't see how someone who is used to JavaScript wouldn't be able to pick up the necessary Python basics within a matter of minutes and be at a point where they could write pretty much everything in Python that they could in JavaScript.

The only real issue I see with Python is that all that power may clash with Apple's App Store sandboxing rules.
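To make the RSS example concrete, here is a stdlib-only Python sketch. The function names are made up and there is nothing Affinity-specific here; it just shows that fetching and parsing a basic feed really is a few lines:

```python
import urllib.request
import xml.etree.ElementTree as ET

def rss_titles(xml_text):
    """Return the <title> of each <item> in a basic RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [item.findtext("title") for item in root.iter("item")]

def place_articles(feed_url):
    """Fetch a feed over the network (also stdlib-only) and return its titles."""
    with urllib.request.urlopen(feed_url) as response:
        return rss_titles(response.read())

sample = "<rss><channel><item><title>Hello</title></item></channel></rss>"
print(rss_titles(sample))  # ['Hello']
```

The equivalent in an Adobe-style ExtendScript environment means hand-rolling or bundling both the HTTP client and the XML handling, which is the asymmetry the post is getting at.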
  13. The full Resolve control panel is $30k, but there are cheaper and less space-consuming panels, like the ones from Tangent, or Blackmagic Design's own recently released Mini and Micro panels, which come in at roughly $3000 and $1000 respectively if I recall correctly. They are all centered around trackballs for a traditional three-way Lift/Gamma/Gain color corrector, which Affinity currently lacks, but it would be a very useful feature even without a control panel, so it would be worth adding anyway. Other than that, in addition to MIDI, DMX (a protocol used to control theatrical lighting) might also be worth looking into.
  14. Unfortunately, there is currently no way that I'm aware of to influence the random seed (or the Z slice coordinate, assuming the underlying noise is 3D). It would be cool if something like that was added as an additional parameter. That way it would be possible to get consistent and reproducible results (handy for Actions, for example) while also influencing the look, without having to re-run the filter a bunch of times like in Photoshop. As far as tiling goes, if I'm not mistaken, Perlin Noise should in theory always be periodic, but unfortunately I'm not sure what Zoom setting in combination with what image/layer size would be required to take advantage of that property inside of Affinity Photo. It would certainly be great if this was made easier. @Haluke: Photoshop's "Clouds" filter does not always give you a tileable result, only if the dimensions of the image are powers of two (which is usually the case for textures, though). By the way, the preview bug I mentioned earlier is still there in the Affinity Photo 1.5.2 update released yesterday.
  15. Yes, I think that looking at DaVinci's masking and keying, or at incredible tools like 3D LUT Creator, really shows how much Adobe has been resting on their laurels with their still photography software over the last few years. It's surprising how much of Affinity Photo has been primarily inspired by Photoshop and its sometimes cumbersome workflows. There is a whole other world out there if you look at Flame/Smoke, Resolve, Scratch, Nuke, and even After Effects, not just with respect to masking, keying and color correction. I'd love to see the folks at Serif look beyond Photoshop more and really redefine what a modern raster image processor can be, even if it comes at the cost of slightly increasing the learning curve for switchers.
  16. I have some hope that it might be related to Apple's rumored announcement of new iPad Pro models :ph34r:
  17. Just a quick note about that video: You can use the "Continuous" checkbox in the Export persona to have Affinity Photo continuously save your texture out in the background for you instead of manually re-exporting it every time you make a change. Not sure how much it would affect performance with a high-res image, but for reasonably small files, I think it's a lot more convenient.
  18. Affinity has this very handy split screen preview that is used for filter previews in Photo, Outline Mode in Designer and so on. It would be really cool if there was a way to also enable this for items from both the History and Snapshots panels to compare different states of the document side by side. This would of course be especially beneficial inside of Photo, but I can imagine it could also come in handy in Designer or Publisher.
  19. The workaround I'm using is to copy & paste from Affinity apps into Illustrator, then copy again in Illustrator and paste into InDesign. However, while the text is intact and editable in Illustrator, it is converted to something that is listed in the layers panel as "<unknown>" in InDesign. It seems to be losslessly scalable vector information, but it's not editable – possibly some form of EPS or similar. Since InDesign can import neither PDF nor SVG (despite having had an easter egg that advertised the latter format for a long time), this seems to be the only way to get editable vectors across as far as I'm aware.
  20. The Affine filter (under Filter > Distort > Affine) may be what you are looking for. It works similarly to Photoshop's Offset filter, but you can do rotation and scale as well. Since the rotation dial is fairly coarse in its increments and also snaps to zero, it is impossible to select small values by dragging it. You may want to mouse over the rotation widget and use the scroll wheel or trackpad for more precise control. You can also double-click the rotation wheel to type in numbers. The caveat is that it will only accept whole degrees, not floating point numbers, which is not precise enough for larger images. It would definitely be nice to see this improved.
  21. I'd love to see the following additions and fixes to the Channel Mixer feature:

Ability to output to grayscale from RGB. Currently, this is a bit cumbersome since it requires the user to set identical slider settings for three different channels (R, G, and B). The current Grayscale setting is of rather limited use since it only supports Gray and Alpha as input channels.

Access to "Spare Channels", as Affinity calls them. Ideally, these would show up in the Channel Mixer as additional sliders.

An "Auto Normalize" checkbox. While a change in output luminosity is sometimes desired, I often find myself unnecessarily wasting time trying to manually normalize the output using the sliders once I'm theoretically happy with my balance.

Use an NSSegmentedControl or similar instead of a popup for selecting the current channel (this goes for multiple places in the UI). The popup requires an unnecessary additional click every time the user wants to switch between channels.

Fix: when typing a value into an edit box and then switching channels without changing focus to another input field first, the value that was typed in is currently discarded and not applied.
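The RGB-to-grayscale mix and the "Auto Normalize" idea are easy to pin down in code. A minimal Python sketch of the behavior I'm suggesting (my assumption of the semantics, not an existing feature):

```python
def mix_to_gray(rgb, weights, auto_normalize=True):
    """Channel Mixer style RGB -> grayscale: gray = r*wr + g*wg + b*wb.
    With auto_normalize, the weights are rescaled to sum to 1 so the
    overall luminosity of a neutral gray stays unchanged."""
    wr, wg, wb = weights
    total = wr + wg + wb
    if auto_normalize and total != 0:
        wr, wg, wb = wr / total, wg / total, wb / total
    r, g, b = rgb
    return r * wr + g * wg + b * wb

# Doubling the red weight would brighten the result; normalization undoes that:
print(mix_to_gray((0.5, 0.5, 0.5), (2.0, 1.0, 1.0)))  # stays 0.5
```

Without the checkbox, the user has to eyeball that division by hand on the sliders, which is the time sink described above.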
  22. Reverse engineering the file format is basically just creating empty files in Fireworks, looking at them in a hex editor, changing one single thing like canvas size, then comparing in the hex editor to see at which point the value is written, then changing another thing and comparing again to see where something changes.

Create a layer. See where the layer counter is stored. See which section of binary data gets repeated for every layer. Change the opacity of one layer, check where in the repeating layer structure that value gets saved. Change the name. Is there a counter preceding the text in the file that says how many characters are in the layer name? Add a path with one point. See what type of data gets added. Add another control point to said path. Now you can find the counter for the number of control points, and you know how the coordinates of control points are saved. Maybe the data that gets added for a vector object is similar to the raster layer structure you identified. Maybe not. You can slowly map out the binary structure of the proprietary data piece by piece this way.

It's not a matter of somebody on the Serif team having the skill. Essentially, you can probably do it yourself. Start with the PNG file format specification, fire up Fireworks, then look at files in a hex editor and identify the proprietary Fireworks chunks, then use your hex editor and/or a binary diff tool and start documenting your findings. It's not that hard, just really, really time consuming and thus probably not a very wise way to spend precious developer time.
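The core loop of that hex-diff workflow is tiny. A Python sketch (the sample bytes below are purely illustrative, not a real Fireworks/PNG chunk layout):

```python
def diff_bytes(a, b):
    """Report offsets where two binary blobs differ — compare a file saved
    before and after changing exactly one property in the application."""
    n = min(len(a), len(b))
    diffs = [(i, a[i], b[i]) for i in range(n) if a[i] != b[i]]
    if len(a) != len(b):
        diffs.append(("length", len(a), len(b)))
    return diffs

# Two made-up saves that differ in a single byte (e.g. 100 -> 200 for some setting):
before = bytes.fromhex("89504e470d0a1a0a00000064")
after  = bytes.fromhex("89504e470d0a1a0a000000c8")
print(diff_bytes(before, after))  # [(11, 100, 200)]
```

One change per save keeps the diff to a handful of offsets, which is what makes the byte-level meaning recoverable at all; the tedium comes from repeating this for every property.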
  23. I think this is just a feature that's going to be developed as part of Affinity Publisher and as such is just tied to the Publisher timeline and part of the workload of the Publisher team, who I'm sure have a lot on their plate right now, building a pro-level DTP software more or less from scratch. Just like we got an early version of what I assume will be the foundation of the master page sync and override system in the form of the Symbols feature in Designer, I think we may see linked files earlier as well as we get closer to the Publisher release date. I'm pretty sure the team at Serif are aware of how important this feature is.
  24. For those looking at thumbnail extraction, metadata, or the basic structure of the file format: this open source project and the corresponding documentation might be of interest. While the speed at which new features are developed would probably make the native Affinity file format impractical as an interchange or final output/archival format, some fundamental details that don't change, such as accessing thumbnails, metadata, page sizes etc., would of course be extremely helpful. However, I do agree with the sentiment that a completely closed file format makes it problematic to blindly trust it with one's data. This is always an issue when a software is discontinued or a company goes away. We have seen this with Adobe/Macromedia FreeHand (most of the file format has been painstakingly reverse engineered, but to my knowledge certain parts still remain undecoded), Aldus/Adobe PageMaker, the proprietary chunks in Macromedia/Adobe Fireworks PNGs that hold the editable data, or, recently, Serif's own complete Plus range. When your last computer (or VM) that can still run the original software breaks, as a user you are basically locked out of your own files and can no longer make any edits. I think it would be a good middle ground if Serif simply made a public, legally binding commitment that in any scenario in which the software is discontinued, the company goes away, or read compatibility with old file format versions is broken, the complete file format will be documented and a full official specification published for anyone to freely access.
  25. When talking about a LUT editor, you'd usually think of a tool that does things like resampling, converting between LUT file formats, generating LUTs to transform between color spaces and tone curves (say, F-Log to Rec.709 or similar), generating inverse LUTs, combining them, converting between ICC profiles and 3D LUTs etc., similar to the excellent Lattice, for example. While 3D LUT Creator can load and save LUTs, its main appeal is its rather advanced color correction features, most of which actually have very little to do with LUTs and could be implemented without ever touching a single 3D LUT file (some of the color space mesh warps use similar concepts, though). The software just happens to allow you to save these corrections out as LUTs. 3D LUT Creator is essentially a standalone image editor with 3D LUT support. As such, asking for a "3D LUT editor" feature in Affinity Photo is not really the right request if we are looking for color correction features similar to 3D LUT Creator's. It's a bit like asking for a PDF editor when you are really interested in page layout features. That's all I'm saying. I have indeed tried the software a few times in the past and seriously considered buying it, as I think its unique color correction tools are extremely useful, but I found it way too unstable for now (at least under OS X).
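For a sense of what an actual LUT-file tool manipulates, here is a minimal Python sketch that writes an identity 3D LUT in the widely used .cube text format (red index varying fastest, per the common convention; error handling omitted, and a real tool would bake corrections into the entries instead of the identity):

```python
def write_identity_cube(path, size=17):
    """Write an identity 3D LUT as a .cube file: a header line followed
    by size**3 'R G B' rows, with the red index varying fastest."""
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")

write_identity_cube("identity.cube", size=2)
print(open("identity.cube").read().splitlines()[0])  # LUT_3D_SIZE 2
```

Resampling, inversion and format conversion all operate on this grid of entries, which is the kind of work a tool like Lattice does and 3D LUT Creator largely doesn't need to.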