Everything posted by Peter Werner

  1. Digital Publishing in InDesign is also largely not very good, mostly due to Adobe trying to monetize the hell out of this market with their hosted solutions, or recently their push to the cloud system with their Publish Online feature – your client's publication stops working if you have an issue with your Adobe subscription or cancel it. You can't host it on your own server. Great. Before that, there was this horrendously expensive abomination called DPS with its Flash-based UI panels, ginormous files and messy bugs. One area where InDesign is particularly horrible is authoring interactivity. In an effort to make it simple, they came up with this unnecessarily complicated preset-based system for animations, whereas a simple timeline like in Macromedia Director and its Behaviors (scripted procedural animations you could apply with a single click) would make everything so much more intuitive. Scripting basic interactivity inside the application? Not a chance. Also, creating slideshows, buttons, scrolling areas and any kind of multi-state object is a royal pain, and it is really unintuitive to select objects after these compounds are created. It all feels like someone hacked a plugin onto the application instead of really thinking about how it could properly fit into the software architecture. Another problem is that half of the interactivity features only work with PDF, others only with Flash, others are exclusive to ePub, others are for DPS, and so on. There is just no consistent concept that unifies all the interactivity features, there are all sorts of different previews for different workflows, and half of them are obsolete because Adobe had to change course. While I do think Publisher should tackle print production features first, interactive digital publications are certainly an area where the experience in InDesign is severely lacking and where a new product like Affinity Publisher could finally get things right. Simple, straightforward HTML5 output like the In5 plugin for InDesign offers (with support for Designer's constraints so that documents can adapt to the screen size), plus ePub, would cover the essentials for Affinity Publisher 1.0, I think. Features could then be expanded from there.
  2. A LUT is basically just a baked color correction or color transform. You can think of it as a pre-computed color correction preset. Despite the slightly misleading name, 3D LUT Creator has little to do with LUTs – they are basically just its method of exchanging corrections with other software. DaVinci Resolve is more like Lightroom for cinema, whereas 3D LUT Creator can be thought of more like a Lightroom or Photoshop plugin. It's not a question of learning one or the other, as they are completely different programs. Affinity Photo already has support for 3D LUTs. It's implemented as an adjustment layer like in Photoshop and allows you to open and apply 3D LUT files, and also to save color adjustments made inside Affinity to LUT files. Though I vaguely remember something being slightly broken in Affinity's LUT support last time I checked. Here are some of the color correction features from 3D LUT Creator that I have yet to see in any other software:
     • The color space mesh warp features/2D curves, obviously.
     • The linear color matrix and grayscale conversion editing using a triangle (it seems to interpret the components of the color matrix as barycentric coordinates, so you can essentially move the corner points of a triangle to edit the matrix). I'd love to see this as a new adjustment layer type.
     • The color chart matching tool, including the option to create custom presets. Resolve has a rudimentary version of this as well, but allows no custom targets. Adobe software needs clumsy third-party workarounds like X-Rite ColorChecker etc. that output custom ACR camera calibration profiles for every shooting situation. 3D LUT Creator's tools are great for creating LUTs to match color reproduction between different cameras, whereas the X-Rite/Adobe hack only lets you neutralize it.
     • Analyzing color distribution (called "Color Sort" in 3D LUT Creator) and the associated color matching feature (which seems to work really well).
     • The HALD workflow (a few programs like DaVinci can do this, but not with tone curve extraction etc.).
     As far as I am concerned, these things would be spectacularly useful to have in a software like Affinity Photo.
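     To make the "pre-computed color transform" idea a bit more concrete, here is a minimal Python/NumPy sketch of my own (not how any of these applications actually implement it): the LUT is just a lattice of pre-computed output colors indexed by the input color. I use nearest-neighbour lookup for brevity; real implementations use trilinear or tetrahedral interpolation, but the principle is the same.

        import numpy as np

        def apply_3d_lut(pixel, lut):
            """pixel: (r, g, b) floats in 0..1; lut: (N, N, N, 3) array of output colors."""
            n = lut.shape[0]
            # Scale each 0..1 component to a lattice index and snap to the nearest node.
            idx = np.clip(np.round(np.asarray(pixel) * (n - 1)).astype(int), 0, n - 1)
            return lut[idx[0], idx[1], idx[2]]

        # Example: a 17x17x17 identity LUT (output equals input), the usual starting point.
        n = 17
        grid = np.linspace(0.0, 1.0, n)
        identity_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
        print(apply_3d_lut((0.5, 0.25, 0.75), identity_lut))  # ~[0.5, 0.25, 0.75]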
  3. Dave, thank you very much for your answer. I'm currently using SVG as sort of a workaround for the lack of built-in scripting. I first ran into SVG's blend mode limitations when I was trying to get files from Mischief (a vector painting program that keeps path data but has no vector export option) into Designer. Paint strokes can easily be saved as paths, but erase strokes cannot be represented in SVG. Other uses for this type of workflow are procedurally generating vector particles, charts or similar things for further editing or integration in Designer. For example, I've recently experimented with a Python script for a project to generate vector effects that are a bit similar to an After Effects plugin called Plexus. Especially with particles, access to all blend modes would be useful. I'd be careful with exporting, since it would lead to files that don't conform to the SVG specification. When importing, you'd be dealing with files that other software or scripts created specifically for exchange with Designer, without the potential to really break anything. But when exporting, chances are high that unsuspecting users create broken files by accident. It's rarely needed for scripting to get data out of the application while preserving 100% visual appearance.
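     For anyone curious what such a generator script looks like, here is a stripped-down sketch (purely illustrative – the point positions, sizes and colors are made up). It writes plain SVG circles that import as editable vector objects; note that the blend mode has to go through CSS mix-blend-mode, which is exactly where the standard's limited list of modes becomes the bottleneck:

        import random

        def particle_svg(width=800, height=600, count=200, seed=1):
            random.seed(seed)
            parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
            for _ in range(count):
                x, y = random.uniform(0, width), random.uniform(0, height)
                r = random.uniform(1, 6)
                # "screen" is a legal mix-blend-mode value; something like Designer's
                # "Erase" has no SVG equivalent, which is the limitation discussed above.
                parts.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="{r:.1f}" '
                             f'fill="#8ecae6" style="mix-blend-mode: screen"/>')
            parts.append("</svg>")
            return "\n".join(parts)

        with open("particles.svg", "w") as f:
            f.write(particle_svg())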
  4. I'm using SVG as an exchange format for a project to get data into Affinity Designer. The SVG standard does not support all blend modes that Designer supports, for instance the very useful "Erase" option is not supported. It would be cool if the SVG importer would simply accept the non-standard blend modes that are supported in Designer but not officially part of SVG instead of reverting to "Normal".
  5. I've once typeset a booklet with mathematical formulas for students in Adobe InDesign for a client. While there are plugins like InMath, they are expensive and clumsy since they rely on hacks like baseline shifts and so on. There was another plugin that could be used as an external formula editor like the one in Word, but it turned out the formulas didn't print properly and there were incorrect characters. Not to mention that these solutions are way too expensive if you only have a one-time project. Back then, I, too, found LaTeXiT for Mac to be the best solution since it allows you to type out small snippets of LaTeX and save them out to a PDF using XeTeX (XeTeX being a key part, since this allows access to OpenType fonts to customize the look to match the text in the publication) or copy and paste them into other applications. But it's not ideal, since customizing the fonts, look and kerning of the formulas takes loads of boilerplate code – LaTeX was never meant for that kind of customization. Not to mention every formula was basically a linked image and had to be rebuilt for even the smallest change. I don't think actual LaTeX integration would make a lot of sense. But I do think that built-in formula editing support for Affinity Publisher ("Maths Persona"?) would be a fantastic feature. Additionally, a way to quickly type LaTeX code simply as an input method (without having LaTeX installed, the formula subset would be enough) and a way to import MathML would make it really quick and flexible to create math-heavy documents. It's not just a feature that benefits the few people who are professionally printing math-heavy publications. I'm certain that such a feature would position Affinity Publisher as a really attractive solution for teachers and educators for creating worksheets and the like. This demographic is currently mostly on Word because something like InDesign is just way out of the price range and learning curve that these people would consider. But at $49, I think it would be a no-brainer for every teacher. All it would probably take is a decent math typesetting feature for Publisher and a non-destructive graph plotting tool for Designer. The expressions parser is already there as part of the text input fields…
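     For reference, this is roughly the kind of snippet that workflow boils down to – a minimal XeLaTeX sketch using unicode-math to pull in OpenType fonts so the formula matches the publication's text face (the font names here are just placeholders, and the preamble is exactly the boilerplate I mean):

        % XeLaTeX/unicode-math sketch; "standalone" crops the page to the formula.
        \documentclass[border=2pt]{standalone}
        \usepackage{unicode-math}
        \setmainfont{Minion Pro}     % placeholder: the publication's body face
        \setmathfont{Minion Math}    % placeholder: a matching OpenType math font
        \begin{document}
        $\displaystyle \int_{0}^{\infty} e^{-x^{2}}\,dx = \frac{\sqrt{\pi}}{2}$
        \end{document}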
  6. If you want to mimic what the old Ultimatte boxes used in television did, you can use Apply Image with SG-MAX(SR,SB) as the expression for each of the red, green and blue channels (set alpha to 1 or simply leave it at SA). That's going to give you a grayscale mask. Clean that up with Levels, Invert, and then go Layer > Rasterize to Mask. You can save that as a macro and it's as close as it gets to a one-click solution. An alternative approach that is also quite quick is to use Select > Select Sampled Color. I can't test how well it works with green screen material right now, though, since I can't use selections due to this annoying bug, but it should work pretty well until Affinity gets a real HSL keyer.
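     Outside Affinity, the same difference-matte idea is easy to prototype; here's a rough NumPy sketch of my own illustrating the G − max(R, B) formula above (it assumes a float RGB image in the 0–1 range, and the cleanup numbers are arbitrary stand-ins for the Levels step):

        import numpy as np

        def green_screen_matte(img):
            """img: float RGB array in 0..1. Returns a mask: subject white, screen black."""
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            raw = np.clip(g - np.maximum(r, b), 0.0, 1.0)   # SG - MAX(SR, SB)
            # Rough equivalent of the Levels cleanup: crush near-zero noise, boost
            # strong green responses, then invert so the foreground ends up opaque.
            cleaned = np.clip((raw - 0.05) / 0.4, 0.0, 1.0)
            return 1.0 - cleaned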
  7. I've tried using Symbols for these kinds of things in the past – symmetry painting, seamless textures etc. – but I found that this workflow is still fairly buggy in the current state and that vector brushes are more reliable than raster layers inside symbols right now. For instance, if the instance of a symbol is created after the raster layer (i.e. if you don't first create the symbol and its instances and only then the raster layer within it), the contents of the raster layer in the new symbol instance are frozen at that state and won't be updated. I also remember having some issues with the transformation of raster layers being off inside symbols when I had to create a kaleidoscope effect for a client.
  8. RGB Curves in Saturation blend mode are definitely an interesting and underused tool. However, they are fairly difficult to use with precision compared to HSL curves. Also, HSL curves in professional film and video color grading software are usually horizontal Bézier splines with handles, not Catmull-Rom splines like in Photoshop-style Curves, so the falloff of the correction can be controlled very precisely and you're not wrestling a curve that tends to just explode when you try to add lots of local changes. Both of course have advantages and disadvantages, though. Exactly – I actually posted a feature request some time ago describing how I'd envision that working. I also think that the current Blend Curves feature could be made a bit more intuitive and easier to control by adding a preview function, so you can actually tell what you are affecting without going back and forth between really extreme color correction settings and the values you actually want all the time.
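     To illustrate the difference between the two spline types (a generic sketch, not either application's actual implementation): a cubic Bézier segment gives the user two explicit handles per segment, while a Catmull-Rom segment derives its tangents from the neighbouring points, so adding a point elsewhere changes the local shape whether you want it to or not.

        def cubic_bezier(p0, h0, h1, p1, t):
            """Bézier segment: h0 and h1 are handles the user drags to shape the falloff."""
            u = 1.0 - t
            return u**3 * p0 + 3 * u**2 * t * h0 + 3 * u * t**2 * h1 + t**3 * p1

        def catmull_rom(p0, p1, p2, p3, t):
            """Catmull-Rom segment from p1 to p2: tangents are derived from p0 and p3."""
            return 0.5 * (2 * p1
                          + (-p0 + p2) * t
                          + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                          + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)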
  9. Well, for most features, there is always a way to achieve the same result. You can't do anything with Levels or Brightness/Contrast that's not already possible with Curves. You can press Cmd+D to deselect, click outside the selection with a Marquee tool, or go to the menu. The question is whether a certain feature makes the workflow significantly more efficient. I think that actual DaVinci-style hue curves fulfill that requirement, in fact much more so than, for instance, Brightness/Contrast would. The workarounds here require tweaking values in at least two different dialog boxes. Things like Saturation vs. Luminance (e.g. darken all saturated colors) aren't possible with blend ranges, and even for simple stuff like brightening the reds, a B&W adjustment layer set to Luminosity blend mode or the current HSL tool does not offer the same level of control, since you only have fixed ranges and linear falloff. Even for Hue vs. Hue, the Bézier curves in a software like DaVinci just give you better control and are much, much quicker compared to messing around with a linear HSL keyer like the one in Photoshop's Hue/Saturation feature – I'd say maybe 4 seconds versus about 30. Not to mention that all these adjustments get really complicated once you want to make corrections to more than one color (say, adjust your reds and yellows) and you have to click back and forth between lots of different dialog boxes and popups, especially if you want to tweak all of them afterwards to get your edits to balance out. Affinity's Blend Curves are amazing and I use them a lot, but it's going to take me ten times as long to figure out what's going on when I open a document with five different HSL adjustment layers which each have Blend Curves applied, versus looking at one single, straightforward HSL curve instead. For simple use cases like desaturating shadows, no doubt, Blend Curves work perfectly fine. But for more complex adjustments in a professional environment where speed matters, I think that real DaVinci-style hue curves would provide a significant workflow improvement.
  10. Regarding point 3, this is how it is usually handled in most professional graphics software. Save and Save As are for native file formats that can be saved and re-opened without any loss of data and with everything intact. Export is what you use when you want to get data out of the software for delivery or interchange with other software only. One big reason why it makes sense to keep these separate is that you don't want to accidentally choose a format that does not support the full Affinity document model when you do "Save As". If you did, your document would still be marked as "saved", and when you close the document, the software wouldn't ask you if you want to save it, meaning you would lose all the information that the file format you chose doesn't support. Let's say you created a new document and did some work on it, then did a "Save As" to JPEG, and then closed the document – you'd completely lose all your vector and layer information without any warning. Or, even more subtle: if you did a "Save As" of a document to JPEG to send to a client as a proof, and then continued working on the document and in the end clicked "Save" (or pressed Cmd+S by habit), it would not only overwrite your JPEG proof file, it would also again discard all the information that keeps your document editable. Also, new users would have to know which file formats preserve which editing capabilities, or they would be bound to lose work by accident.
  11. It would definitely be useful to be able to use all transform and align tools on the control points as well. Not just the regular selection tools, but also things like flip operations, and possible future features like, say, bicubic mesh warp, selection brushes, Liquify-type warp brushes, noise-based displacement etc. if something like that is implemented. I think these types of features should always be designed from the get-go to work both on the control point level and on the object level. Most 3D software actually handles that as some sort of global mode switch for toggling between object-level operations and component-level operations (control points, edges, faces etc.), and all tools in the application support this switch, whereas most 2D software unfortunately limits the control point functionality to basic selecting and moving of points.
  12. I'm not sure about NeXT being the best choice for Apple at the time – except for the obvious part about getting Steve Jobs back. As davidpwrmc pointed out, at the time there also was some talk about them buying Be and using BeOS as the foundation for the new MacOS. BeOS had a much more modern architecture: a database-backed file system that basically had the Spotlight index built in before Spotlight existed, it was completely multi-threaded from the ground up so applications would never lock up (pretty much unheard of in the late '90s), it booted in seconds, had a clean and really easy-to-use native C++ API, and it could play multiple videos in parallel and still render a few OpenGL windows at the same time, which back then was exclusively the domain of insanely expensive SGI workstations. There were even versions of Steinberg Cubase and Maxon Cinema 4D as well as a few great native programs for the platform before Be was more or less forced out of the market by Microsoft. If you compare that to the current macOS, which is actually a fairly strange combination of ancient Unix, classic Mac legacy, the NeXTSTEP APIs, completely new Apple tech, and all sorts of other bits and pieces, BeOS actually was in many ways a much more modern operating system than macOS is even today. There are even a few guys who basically re-programmed the entire BeOS from scratch as open source because so many people thought it was such a shame that it went away. I think they are still going at it steadily under the name Haiku; I even saw something about an upcoming beta release. However, I haven't tried it yet. By the way, the classic Mac OS Creator and File Type fields are still around even on macOS and there are APIs in place to read and write them (NSFileManager's setAttributes command with NSFileHFSTypeCode and NSFileHFSCreatorCode if I recall correctly). In theory, you could use that to recognize FreeHand files without extensions that were saved on classic Mac OS, as long as the file's resource fork and Finder info were preserved.
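     Since scripting keeps coming up in these threads: those legacy fields are indeed still readable today, for example from Python via PyObjC. This is only a quick sketch under the assumption that PyObjC is installed, and the path is a placeholder; FreeHand's actual creator code would still have to be looked up and matched.

        from Foundation import NSFileManager, NSFileHFSTypeCode, NSFileHFSCreatorCode

        def hfs_codes(path):
            """Return the classic Mac OS (type, creator) four-character codes, if set."""
            attrs, error = NSFileManager.defaultManager().attributesOfItemAtPath_error_(path, None)
            if attrs is None:
                raise OSError(str(error))

            def fourcc(number):
                value = int(number or 0)              # 0 means the code was never set
                return value.to_bytes(4, "big").decode("mac_roman") if value else None

            return fourcc(attrs.get(NSFileHFSTypeCode)), fourcc(attrs.get(NSFileHFSCreatorCode))

        # Hypothetical usage: a file saved on classic Mac OS without a filename extension.
        print(hfs_codes("/path/to/old_freehand_file"))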
  13. This is actually a really smart idea. Later when scripting is added, it would also be nice to have a corresponding "copy history steps as script" command.
  14. Agreed, numeric readouts are pretty much essential. HSL/HSV curves are definitely an extremely useful feature (there even used to be a paid third-party plugin for Photoshop, I believe, because Adobe just never got their act together). However, a horizontal cubic Bézier curve like in most video-focused applications (such as DaVinci Resolve) is much easier to control for HSL/HSV adjustments compared to the Photoshop-style linearly ascending Catmull-Rom spline that PhotoLine seems to use (judging by the screenshots). In general, for some cases an option to use Bézier curves for other spaces like RGB would also be useful (in this case, the ascending version is fine). One other point that bugs me about the current Affinity Curves is that you can't select multiple points and move them together. And that the LAB curves default to the rather useless "Master" instead of, say, the "L" channel. And that switching channels is implemented as an annoying popup that requires an unnecessary additional click every time compared to, say, what Cocoa calls a "Segmented Control". Alpha curves don't seem to work for me at all in recent versions, at least not on regular layers. I think it's a bug, but I might just be doing something wrong. It would also be cool to have an option to highlight the zone that is targeted in the image view when moving the mouse over the curves. Similar to a clipping warning or QuickMask overlay, but for a zone of an adjustable number of tones around the one under the mouse. This way it would be immediately obvious which areas are being targeted. Sort of the reverse of what jmoren suggested about plotting the image pixel under the mouse cursor in the curves dialog. As soon as the user starts to drag on the curves, it would have to be hidden so that the adjustment can be judged properly, but then it would come on again after the mouse is released. And of course the standard picker buttons for black point, white point and gray point. In addition, a white balance picker that adjusts the white point (gain) to make the color that was sampled neutral, but keeps the luminosity of the original sampled color the same instead of pushing it to white like Photoshop's white point picker does. Currently, if my image has a color cast that I want to fix by adjusting the gain (white point), I'd have to use a gray point picker and then manually adjust the white point until the curve is linear, which is an unnecessary waste of time. And I'd love another picker mode with two color pots, so you can click one color and then a second, and it would place a curve point appropriately so that the source color will be mapped to the target color. Say you have an area of skin that has some unwanted light spilling on it and you want to match it to a darker, shaded area – just click one, click the second, and you have a perfect match. Or you can create a palette with target skin tones, load one color into the target pot, click the source pot, and you're 90% there.
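     For what it's worth, the white balance picker I'm describing is simple math. Here's a rough sketch of my own illustrating the idea (the luminosity weights are the usual Rec. 709 ones, gamma is ignored for brevity, and it assumes the sampled color has no zero channel):

        def neutralizing_gains(sampled_rgb):
            """Per-channel gains that make the sampled color neutral gray
            while keeping its (Rec. 709) luminosity unchanged."""
            r, g, b = sampled_rgb
            target = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luminosity stays the same
            return (target / r, target / g, target / b)

        # Example: a warm cast sampled from something that should be neutral.
        gains = neutralizing_gains((0.62, 0.55, 0.48))
        print(gains)  # multiply each channel of the image by these gains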
  15. Definitely – with the current behavior, you always have to check which tool you are in before pressing a shortcut key.
  16. The export options for TIFF files always remember the last setting, rather than defaulting to the document color mode. I.e. if the last export was CMYK, the default the next time will be CMYK, even if the document is RGB, and the other way round. This is dangerous since it makes it easy to perform unintended color space conversions on export. Especially beginners who may not know about RGB and CMYK or just trust the default export settings might be in for a nasty surprise. Ideally, the export dialog would check whether the color format used for the export matches the document's color format; if it does, it would offer the new document's color format as the default the next time. If the user changes the setting to something other than the document color format, it would offer that specific setting the next time, regardless of document format. That way it won't get in the way of, say, someone manually exporting 10 RGB documents as CMYK, but it also wouldn't catch someone exporting an RGB file to RGB TIFF by surprise if their last export was a CMYK document to CMYK TIFF.
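     In pseudocode terms, the rule I'm proposing would be something like this (a sketch of the behaviour only; the names are made up and have nothing to do with Affinity's internals):

        def default_export_format(document_format, last_export_format, last_document_format):
            """Decide which color format the TIFF export dialog should pre-select."""
            if last_export_format == last_document_format:
                # Last time the user stayed in the document's own color space:
                # keep following the document, so an RGB file defaults to RGB again.
                return document_format
            # Last time the user deliberately converted (e.g. RGB doc -> CMYK TIFF):
            # keep offering that explicit choice, regardless of the new document.
            return last_export_format

        print(default_export_format("RGB", "CMYK", "CMYK"))  # -> "RGB"  (no surprise conversion)
        print(default_export_format("RGB", "CMYK", "RGB"))   # -> "CMYK" (explicit choice sticks)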
  17. We have had these discussions in other threads already. Automating tasks is possible with Python as well, but AppleScript (which I personally find really confusing in terms of syntax) ultimately means support for platform-specific automation interfaces on the Mac, including support for tools like Apple's Automator. So depending on the intention of the feature request, Python may or may not be a viable alternative. While I think Python would be fantastic because it scales equally well from simple tasks that a beginner with very little programming experience might want to tackle to complex, seamless integrations that would require lower-level C++ plug-ins in other software, I think it might be more constructive for us to talk about what type of automation and workflows we are trying to solve, rather than request specific technologies. So looking at the original post, are we talking about requests for AppleScript because of Mac users' familiarity with the language? Or is it a request for a way to automate things where calls have to be made to multiple different programs, rather than just Affinity itself? Or is it simply about having robust scripting capabilities, regardless of language?
  18. As soon as Publisher and multiple pages arrive, this is likely going to be an issue. Even when you're not embedding but just linking, assuming Affinity will store a preview in the document. You'll get people with terabyte-sized files (not to mention photographers who edit hundreds of high-res pictures per shoot and store layered files). Or take InDesign as an example – if you place a lot of images that have a DPI value of, say, 96 set in the file metadata, it will assume that the image is really big (in terms of physical area) and thus store a close-to-full-res preview, resulting in extremely large and bloated document files despite the image files being linked. I of course know nothing about the Affinity architecture, but I'd expect that file I/O bandwidth would really be the limiting factor and that decompression could basically happen in the downtime while the processor is waiting for the data to arrive from the drive, so the speed penalty should theoretically be minimal. Of course this might be different on platforms like iOS where storage is quite fast. Recent Linux and now also macOS versions have a module that losslessly compresses memory pages before the system starts swapping, because it's still faster to compress and decompress data in memory on the fly than to save to and load uncompressed chunks from disk. I've always wondered why Adobe hasn't added anything like that to Photoshop's memory manager, but I guess the dev team is busy with stuff like coding HTML5 skins like Design Space. Unless there is really a major slowdown associated with this, I'd definitely say that there should be an option (or alternatively a good behind-the-scenes mechanism) to store the original compressed version in Affinity files instead of full uncompressed raster data. That way, it would also be possible to "unembed" the original image data and get the original file back, not a huge uncompressed one or a lower-quality re-compressed one.
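     Just to put the size argument in numbers, here's a trivial Pillow sketch (the file name is a placeholder) comparing what it costs to keep the original compressed bytes versus fully decoded 8-bit raster data:

        from pathlib import Path
        from PIL import Image

        path = Path("placed_photo.jpg")            # hypothetical placed/embedded image
        original_bytes = path.read_bytes()         # what "keep the original" would store

        with Image.open(path) as im:
            width, height = im.size
            channels = len(im.getbands())
        decoded_bytes = width * height * channels  # cost of uncompressed 8-bit raster data

        print(f"original file:  {len(original_bytes) / 1e6:.1f} MB")
        print(f"decoded raster: {decoded_bytes / 1e6:.1f} MB")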
  19. When Option/Alt-Clicking a layer/object/mask to isolate or "solo" it in the viewport, there is currently no visual feedback that the application is in this mode. It would be nice if the Layers panel would display a special outline or highlight the level in a specific color to make it immediately obvious that this mode is active. I don't know if this is a document-wide setting or one that lives in the view, but if the latter is the case (i.e. if it is possible to have two windows of the same document open and only one of them in isolation mode for a particular layer), I'd suggest that the Layers panel always reflect the state of the view that currently has the focus.
  20. Most professional publishing software actually has a counterpart for text editing that is intended for the editors in an editorial workflow, like for newspapers and magazines. For InDesign, there is InCopy; for QuarkXPress, there is Quark CopyDesk. Editors can write their articles while accessing paragraph and character styles and seeing the text in the context of the layout, including accurate text wrap. There are also locking mechanisms so people don't overwrite each other's work. Collaboration happens either via a file server or via something like Dropbox. These programs actually make fairly decent word processors, though they are of course not optimized for it, hence tasks like placing images, setting margins and other layout operations are very limited. As Affinity is going to be a professional suite, I expect that at some point in the future there will be such a program, one that takes what I imagine would be a text editing persona from Publisher into a separate application, intended for editors collaborating with the layout department. From my perspective, I think this should be a free or at least extremely low-cost program (say, 4,99€). That way individual designers could have clients or copy writers install that program, allowing these people to work on the text inside the layout before the designer then makes the final formatting touches. Much better than getting new Word files every day and manually re-applying all formatting in situations where there are a lot of frequent copy changes but work has to start on the layout. It could also serve as a free viewer for the Affinity file formats that would allow people who don't have Designer/Photo/Publisher to display, export and print Affinity files. I think a system like this could have one other really important use. For instance, many designers are in the situation that the client has to come back to them for every single copy edit. Say you are designing a brochure for a car company, and you want every dealer to have their address and their individual prices in there. With such a copy editing/viewer program, the designer could mark the corresponding text boxes as free to edit in the copy editing software. And then every single dealer could easily print or export customized versions every time their prices (or address) change. There are solutions for InDesign out there, but they usually involve complicated and expensive server-based systems that only make sense for large enterprise workflows. If Affinity Publisher/Designer/Photo were to provide a simple solution for that, it would be a godsend for many smaller workflows.
  21. Since the Focus Merge process must determine the Z order of the photos and which pixel to use from which image, it is implicitly reconstructing depth information for each pixel anyway. It would be nice to have access to this data, either for further editing (such as enhancing perceived depth by applying color correction), or even to use it, say, as a displacement map in 3D-capable software to generate a textured 3D model from the image. I'd suggest a simple "Generate Depth Map" check box in the Focus Merge dialog box that would then make Focus Merge output a grayscale depth map as either a separate layer or a channel.
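     To be clear about what I mean by "implicitly reconstructing depth", a bare-bones version of the idea looks something like this. This is a generic depth-from-focus sketch, not a claim about how Affinity's Focus Merge actually works; it assumes an already aligned stack of grayscale frames ordered from near focus to far focus:

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def depth_from_focus(stack):
            """stack: list of aligned grayscale frames (2D float arrays), near to far."""
            # Local sharpness per frame: smoothed squared Laplacian response.
            sharpness = np.stack([uniform_filter(laplace(frame.astype(float)) ** 2, size=9)
                                  for frame in stack])
            index = np.argmax(sharpness, axis=0)          # which frame is sharpest per pixel
            return index / max(len(stack) - 1, 1)         # normalised 0..1 depth map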
  22. In general, I think it would be useful if all color corrections (Adjustment Layers), not just black and white conversions, could optionally be applied to vectors, i.e. baked into the vector stroke and fill colors instead of being essentially a raster post effect. Similar to the "Merge" command, only for merging into vectors.
  23. I've been wondering why Affinity Photo now has a fantastic projection mode for editing 360 degree equirectangular panoramas, but no way to actually stitch/produce them. I thought I was missing something. I agree that this would definitely be a useful addition.
  24. Agreed, this has been an issue in Photoshop for years. Ideally, these keys would be bound to the keyboard layout so that even when you're changing between different keyboard layouts a lot while the application is running (like say, German, US English and Japanese), these types of shortcuts will still function reasonably well and not suddenly become impossible to press.
  25. There are several different brush types in Affinity, and unless I am missing something obvious, it appears to me that the raster brushes used in Affinity Photo can only be used with the brush tools. It would be fantastic to be able to apply these non-destructively to vector paths – essentially a non-destructive version of Photoshop's "Stroke Path" feature. This is something that, for instance, Macromedia Fireworks and Discreet Combustion were always able to do, but for some reason it was never added to Photoshop. Possibly, the applications could also support adding multiple brushes (or strokes in vector terms) on the same path, opening up even more possibilities. In addition to the obvious benefits, having raster brushes on vector paths would also allow the brush engine to be used more like a particle system, another long-standing omission in Photoshop's feature set that would be great to see in Affinity. NOTE: There is a "Texture Line Style" setting in Affinity Photo's path stroke options, but this doesn't seem to work with the raster brushes, counter-intuitively not even with those in the "Textured" category in Photo's brushes panel. I assume that this requires Designer brushes (as these work when applied in Designer in the same place), but those are not available inside Photo as far as I can tell. As such, I find this a bit confusing, and it seems that I'm not alone. Some kind of note in the Stroke popup under "Texture Line Style" that lets the user know that they need to use Designer to apply texture stroke brushes would be useful.
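     Conceptually, "stroking a path with a raster brush" is just stamping the brush tip along the path at a given spacing. Here's a rough Pillow-based sketch of that core idea, purely illustrative: a real brush engine also handles pressure, dynamics, blending and sub-pixel placement.

        import math
        from PIL import Image

        def stroke_path(canvas, brush, points, spacing=4.0):
            """Stamp the RGBA brush tip every `spacing` pixels along the polyline `points`."""
            carry = 0.0
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                seg = math.hypot(x1 - x0, y1 - y0)
                d = carry
                while d <= seg:
                    t = d / seg if seg else 0.0
                    x = x0 + t * (x1 - x0)
                    y = y0 + t * (y1 - y0)
                    # Paste using the brush's own alpha channel as the mask.
                    canvas.paste(brush, (int(x - brush.width / 2), int(y - brush.height / 2)), brush)
                    d += spacing
                carry = d - seg
            return canvas

        # Hypothetical usage:
        # canvas = Image.new("RGBA", (800, 600))
        # brush = Image.open("brush_tip.png").convert("RGBA")
        # stroke_path(canvas, brush, [(100, 100), (400, 150), (700, 500)])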