
Medical Officer Bones

Members
  • Posts: 656
  • Joined
  • Last visited

Reputation Activity

  1. Like
    Medical Officer Bones reacted to kirkt in How to reduce colors in Photos to create C64 looking Photos?!   
    Here is a link to the palette of 16 colors that the C64 had available for display:

    https://www.c64-wiki.com/wiki/Color
    You can use this to construct a custom palette in an application that supports such things and see if that works for you.  I encountered the long-standing lack of a way to specify a custom palette for GIF export as well and got the same error as in the thread to which @Medical Officer Bones linked.
  2. Like
    Medical Officer Bones got a reaction from kirkt in Node-based UI for AP. Please?   
    In my experience no layer-based image editor is perfect. PhotoLine does support pretty much a full non-destructive workflow with 32bpc images, and, like Photoshop, fully implements a "smart objects" ("placeholder layers") workflow, up to the point of allowing external PS plugins to be applied as live filters, and using placeholder layers as masks for other placeholder layers, and live instancing of layers. Krita also supports instanced layers, and a non-destructive filter layer.
    The thing that really bothers me about Photoshop is its reliance on clipped layers to create stacks of combined masks. It just works better to allow multiple (grouped) layer masks.
    But the issue remains that layer-based editors do slow down at some point, and things become rather complex fast: a nodal approach is often easier and more effective to work with when compositing gets complex. I would love to see a nodal layer of some sort implemented.
    If memory serves, there was a Mac-based image editor that (years ago) implemented a kind of stackable puzzle approach on top of the traditional layer stack. I forget its name; it was quite intriguing, but development stopped at some point.
  3. Like
    Medical Officer Bones reacted to kirk23 in Node-based UI for AP. Please?   
    I would be happy with any way to do non-destructive image editing, node-based or not. At least up to Photoshop's level, where you can use "group clipping" and a separate stack of smart objects as non-destructive masks for another smart object. The recent addition of access to a smart object's internal layer comps, up to the main document's layer comps, also adds to this.
    In fact, Photoshop in its current state could be truly and completely non-destructive, if only it supported 32-bit floating point image editing the same way Affinity Photo does.
    Still, Photoshop is slow as hell once your stack is complicated enough to do anything non-destructive. That's my guess as to why nobody uses it this way. Affinity shows much more promise in that regard, but lacks even basic features like "chained" transform links. Layer comps and live effects/filters are also pretty limited compared to Photoshop.
    So yes, if we could have a live filter, like the "procedural" one, with a node-based UI, it would be super helpful. But first, for such a new node-driven layer, the software needs access to the content of other layers as initial inputs for the node-based calculation flow.
    My main image editing software is currently Substance Designer. But my gosh, it is so monstrously, so tremendously inconvenient in every piece, every detail. You have to redo every node and every tool with "pixel processors" and recollect long-forgotten vector algebra and trigonometry to do so. It is so artist-unfriendly in every UI detail that I always feel nostalgic about layers there. A macaroni monster Gordian node, as I call it. So my hope is that we could have the best of both worlds, layers and nodes, in both Photo and Designer.
      
  4. Like
    Medical Officer Bones reacted to kirk23 in Node-based UI for AP. Please?   
    Thanks, Medical Officer Bones, for reminding me about PhotoLine. It looks promising for sure. They have probably made quite a bit of progress since the last time I tried it. And I agree that Photoshop's "group clipping" is a clunky way to introduce non-destructiveness; I often couldn't figure out a thing in my own mess of groups and smart objects there. PhotoLine definitely brought a touch of simplicity and elegance in that regard.
    I recall that the lack of "chained" transform links annoyed me. I tried to perform a typical depth-based object/layer combine trick, and the lack of "chains" made it fall apart every time I needed to re-compose something.
    Same problem in Affinity Photo: no links or multi-selection, and if you accidentally touch something, it breaks. I re-subscribed to Photoshop because of that.
    Still, every time I work in Substance Designer I miss the ability to instantly select some object on screen, move/scale it around, paintbrush a mask or vector shape, and manually scatter some vector-based particles along a vector spline. Substance Designer's SVG and bitmap paint nodes are absolutely horrendous and unusable.
    Whatever mess of group clipping Photoshop may have, a Gordian node of those Substance Designer nodes and their connection lines is on a totally different level.
    I wish Affinity Designer could simply use Substance Designer .sbs files, or Filter Forge ones, as live filters. But the issue of inputs still stands: we usually need inputs from other layers too to do something meaningful with nodes.
    I also use Blender's compositing mode a lot since they introduced "cryptomatte" last year. Perhaps some kind of bridge with the Affinity software could be cool too. In general, I love Blender's approach to node editors; in my opinion they are the best I have seen, and much easier to work with than either Substance Designer's or Filter Forge's.
    I suspect one day Blender may become quite a solution for image editing too. 2D or 3D, it's all basically the same.
  5. Like
    Medical Officer Bones got a reaction from Aftemplate in Node-based UI for AP. Please?   
  6. Like
    Medical Officer Bones got a reaction from flc_ in PNG/JPG Export Quality   
    There seems to be some confusion regarding web resolution, PPI/DPI, how to prepare for the web, and such.
    DPI is completely irrelevant for web and screen (mobile/tablet/desktop) work. Forget about DPI or PPI. It tells nothing about the actual resolution of a file. 1 pixel can be saved at 50000ppi, and a million pixels at 1ppi. It is merely a parameter that tells print software at what relative size it should be printed.
      Technically DPI is the wrong term in Affinity's dialogs. PPI (Pixels Per Inch) is the correct term.
      For web/screen export, think PIXELS. When designing for screen output, only pixels count. Forget PPI! Screen tech and software ignore that parameter.
      In the early BFP (Before Flat Panels) era, the resolution of screens was more or less related to the size of those screens. The larger the CRT screen, the higher the possible resolution, and 1 image pixel equated to 1 screen pixel. It was very simple to calculate the required resolution/pixel dimensions of an asset for a web page: just export at the exact pixel size required.
    For example, if a logo had to be placed at 600px by 100px, that is what you exported and prepared it at. PPI was (and still is) entirely irrelevant.
      Things changed quite a bit AFP (After the introduction of flat screen technology). No longer can we relate the physical size of a screen to its physical pixel resolution.
    The first iPad had a 1024x768 resolution. The iPad 3 introduced the retina screen, which offers double that: 2048x1536. The screen size did, however, not change.
    Nowadays much smaller screens than an iPad screen are capable of displaying higher resolutions than that.
      Desktop screens display 4K or even 8K now, but the actual physical dimensions of those screens are similar or the same as the previous generations.
      This poses a problem to screen/web designers: at what pixel dimensions should assets be produced?
    To solve this problem the concept of MULTIPLIERS was introduced by screen designers.
      The multiplier tells the designer at what pixel dimensions a bitmap asset should be exported. The goal is to hit the native pixel resolution/dimensions for each targeted platform/screen (or close to that resolution).
    Example iPad 1: assets are exported at the native resolution (related to 1024x768 pixel screen). We prepare @1x: 600px by 100px.
    Example iPad >3: this is a device for which we prepare @2x assets: 1200px by 200px
    Result: we deliver two assets when our target platform is the regular iPad.
      For web export, at least @1x and @2x assets are required. When using the correct responsive <img> tag code, a browser will load the correct version depending on whether the screen it is viewed on is retina or not.
    At this point the screen designer must realize that only providing a @1x asset will result in fuzzy looking bitmap graphics on a retina screen.
      But many handheld devices have far higher PPI resolutions (recall that PPI tells us about the relationship between the screen size and the native screen resolution): very small screens at incredibly high resolutions, requiring @3x, @4x, and even @5x assets.
    How do we figure out the multipliers?
    Answer: we check the configurations, and calculate the multiplier. Luckily, someone already did this for us:
    https://material.io/resources/devices/
      This means that BEFORE designing any bitmap screen asset, the designer MUST decide what the highest target multiplier must be.
    And create the bitmap asset at that highest resolution.
      At this point a "pixel" as a unit is unsuitable as a base unit when discussing the dimensions of a bitmap asset. Therefore, DP and PT units were introduced. DP ("dip"(s)) is a Google coined 'unit'. PT (point(s)) is Apple's preferred 'unit'.
      Example: we need to prepare a 600px by 100px bitmap asset. When we communicate this to our fellow designers and developers, we no longer use pixels, but either dips or points: 600dp/pt by 100dp/pt.
    Next, we must decide on the highest multiplier: with @3x we target most devices right now.
    We then calculate the highest resolution: 3 × 600 = 1800 and 3 × 100 = 300.
    We create a new document at 1800px by 300px, and work at this base resolution.
    This is required for all our assets in this project.
      When the final asset is to be delivered, export all multiplier versions with a standardized prefix or suffix which indicates the multiplier.
    In our example above, that means three bitmap assets: logo.png, logo@2x.png, logo@3x.png
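    To illustrate the responsive <img> approach mentioned earlier, here is a minimal sketch of how the three exported files from this example could be wired up. The file names follow the suffix convention above; the path and alt text are just placeholders.
      <!-- The browser picks the candidate matching the device pixel ratio:
           logo.png on @1x screens, logo@2x.png on retina, logo@3x.png on @3x. -->
      <img
        src="logo.png"
        srcset="logo.png 1x, logo@2x.png 2x, logo@3x.png 3x"
        width="600"
        height="100"
        alt="Logo">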

       
  7. Like
    Medical Officer Bones got a reaction from loukash in PNG/JPG Export Quality   
    Other recommendations:
    If possible, export as vector SVG files. In that case there is no need to worry about multipliers. Do make sure the SVG code is responsive and automatically scales up and down inside its HTML container.
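    A minimal sketch of what "responsive" means here, assuming a hypothetical logo.svg: the SVG carries a viewBox but no hard-coded width/height attributes, so the browser scales it to fit its container.
      <!-- logo.svg is assumed to contain viewBox="0 0 600 100" and no fixed
           width/height attributes; it then stretches to fill its container. -->
      <div style="max-width: 600px;">
        <img src="logo.svg" style="width: 100%; height: auto;" alt="Logo">
      </div>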
      Even better, if you deal with GUI elements, work with the built-in framework GUI options. For example, instead of exporting that flat button design as a bitmap, use SVG. But if that same button design can be replicated using HTML and CSS styling code, definitely run with that last option.
    Built-in GUI components top SVG, which tops bitmaps.
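    For example, a flat button like the one described above could be built with plain HTML and CSS rather than a bitmap, so it stays sharp at any multiplier. The colours, sizes, and class name below are made-up placeholders.
      <style>
        /* Entirely CSS-drawn flat button: no bitmap asset, no multipliers needed. */
        .flat-button {
          display: inline-block;
          padding: 10px 24px;
          border: none;
          border-radius: 6px;
          background: #2d7ff9;
          color: #fff;
          font: 600 16px/1 sans-serif;
          cursor: pointer;
        }
      </style>
      <button class="flat-button">Sign up</button>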
      If it can be helped, never use JPG for sharp-looking text, line art, logos, vector work, etc. JPG works well for photos and multi-tone images, but visibly degrades those aforementioned types of graphics due to its lossy compression algorithm.
      Instead, use PNG. If your work consists of fewer than ~256 colours and tints, indexed PNG files are preferable and save file space. The best way to compress a PNG is worth an entire post by itself; suffice it to say that dedicated tools are required to best compress and optimize PNG files, and image editors are not that great at it. I use Color Quantizer, which is (in my opinion) the best PNG optimization tool currently available. Only for Windows, though, but worth it and free! http://x128.ho.ua/color-quantizer.html
      WebP is also excellent for both photos and sharp lossless assets (WebP supports both lossy and lossless compression), and it is now supported by all major browsers. Except, of course, Apple: Safari does not support it. 😞 Neither does Affinity, which only opens these files but does not export to WebP.
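    Until Safari catches up, a common workaround is to serve WebP with a PNG fallback via the <picture> element; a minimal sketch, with made-up file names and sizes:
      <!-- Browsers that understand WebP use the <source>; Safari and other
           non-supporting browsers fall back to the PNG in the <img>. -->
      <picture>
        <source srcset="artwork.webp" type="image/webp">
        <img src="artwork.png" width="600" height="600" alt="Artwork">
      </picture>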
    @Ali1 Your original issue has to do with attempting to export a 600x100 px asset "as is" from a vector package. In my experience there is just NO chance of getting a good-quality, acceptable result that way, even in Illustrator. This is caused by a couple of things, such as non-integer (fractional pixel) positioning, which adds unwanted soft anti-aliasing to the edges.
    What works well to maintain high-quality, sharp-looking, anti-aliased low-resolution assets are the following steps:
    1. Work at a 3 to 4 times higher resolution.
    2. Export at that bitmap resolution.
    3. Optional: perform some pre-downscale sharpening.
    4. Open the result in an image editor like Affinity Photo, and scale down to the required lower resolution.
    5. Optional: perform some post-downscale sharpening.
    The result will be much better looking. Ideally, use a downsampling algorithm such as Catmull-Rom, which works extremely well to maintain sharp details. Unfortunately, most image editors do not support it.
    Of course, if you are already working at a @3x or @4x multiplier, the above steps generally are not required, because your base resolution is already very high. But it depends a bit on the image editor: the only way to check for quality is to go through the above steps at least one time and compare your manually exported version with the automatic ones.
  8. Like
    Medical Officer Bones got a reaction from ashf in Anti-alias aware fill   
    The Anti-alias option in Photoshop (or in Gimp, for that matter) is not the same as a dedicated overfill feature. The anti-alias option isn't actually a good solution, and often delivers not-so-great to unusable results. The application either needs to be 'aware' of anti-aliasing and overfill by default (as ClipStudio is), and/or a dedicated overfill option needs to be included.
    Anti-aliasing by itself generally doesn't solve all use cases satisfactorily.
  9. Like
    Medical Officer Bones got a reaction from Alfred in Anti-alias aware fill   
    It is unfortunately one more paper cut in Affinity. Two paper cuts in this use case, actually.
    The first issue with the fill tool is the lack of anti-aliasing/alpha awareness or the lack of an overfill option.
    In other software this is solved by either including the alpha of pixels by default, or including an option to control the overfill. In Krita it is called "Grow Selection". ClipStudio Paint does it automatically by overfilling transparent pixels (because who would want ugly aliased fills!). PhotoLine fixes this with an "Overfill" option. Art Studio has a "smart fill" option.
    Gimp, Photoshop, and Affinity Photo rely on the tolerance threshold to address this, but it is less than ideal, and often doesn't work as expected, or requires more fiddling around. Affinity's threshold seems the most finicky of all in my experience.
    The second issue with Affinity Photo's fill tool is the lack of a "read merged" or "all layers" option in the Source drop-down list. Here is why: suppose the artist wants to quickly fill areas of comic artwork. The default workflow is to put the line art in a layer above the fill layer, then fill areas. This can be done in several ways, and one quick method is to just use the fill tool. The artist then chooses "sample merged" or "(sample) All layers", sets the overfill (depending on the software) and fills the area. 
    This workflow is unavailable in Affinity Photo, however: the Source list only includes "current layer & below", "current layer", and "layers beneath". It is literally impossible to fill comic art using this very basic approach in Affinity Photo.
    ClipStudio Paint offers the best controls and has options that range far beyond "sample all layers", which is great. Krita has a nice "Color labelled layers" option, aside from the "All layers" option, to precisely define which layers to sample from. PhotoLine, Gimp, Photoshop: these all have the "All Layers" option.
    Yet Affinity Photo lacks any such option, rendering the fill tool pretty much useless for such work (including bringing in technical drawings as line art, btw! Not only comic art jobs!). Instead, the fill layer must be placed ON TOP of the line art, which is very, very awkward.
    The Affinity devs should take note: add an overfill option, and allow the user to sample the layer(s) above the fill layer. Without these two very basic options the fill tool is pretty much useless for anything beyond simple tasks. At the very least, add the "all layers" source option to reach feature parity with Photoshop and Gimp.
     
     
     
  10. Like
    Medical Officer Bones got a reaction from Stocker.jp in WebP in Affinity Photo   
    Well, now that Apple announced that Safari will be supporting the webp format, I see no reason for the Affinity devs to hold off on supporting the export of it.
    https://www.macrumors.com/2020/06/22/webp-safari-14/
  11. Like
    Medical Officer Bones reacted to ashf in Mac Transition to Apple Chips   
    The iPad version has already matured and has almost the same functionality as the Mac version,
    so I wouldn't be surprised if Serif is already building an ARM version.

    Also, since there are no ARM Macs yet, there's plenty of time to prepare for it.
  12. Thanks
    Medical Officer Bones got a reaction from CLC in Free Transform, Perspective & Warp Tools   
    I just dragged and dropped an RX100-VII ARW file (a newer model) into the window, and it worked, so your file should be supported as well. Btw, PL's raw developer is non-destructive. It is also possible to drag a file into the window while holding down the alt/option modifier key to create an externally linked smart object (called a "placeholder" layer).
    ...getting off-topic now. I hope free transform of vector objects/layers will be added by the time v2.0 is out. And a non-destructive Blend/morph option. And true vector brushes. And vector patterns.  And many other basic features still missing.
  13. Like
    Medical Officer Bones got a reaction from Boldlinedesign in 1bit / bitmap mode colour format?   
    1-bit mode would be very welcome.
     
    The only image editor that I am aware of that supports 1bit layers with high PPI resolutions and allows these to be combined with 300ppi colour ones in the same layer stack is PhotoLine.
     
    Crossing my fingers for Photo's next version that will hopefully support 1bit properly.
  14. Like
    Medical Officer Bones got a reaction from Efvee in 1bit / bitmap mode colour format?   
    True, true. InDesign, Quark, (even the old Freehand, I recall) and PhotoLine all support this.
    As I said, Affinity remains crippled for a wide range of print work if the developers maintain their stance. Bit of a shame, really.
  15. Like
    Medical Officer Bones got a reaction from Michael Lloyd in Affinity not for webdesign? No webp? Still?   
    Agreed. WebP is a unique format in that it combines lossy image compression with alpha transparency. It is a great format for 2D hand-drawn game graphics, for example, which is what I use it for. PNG takes up too much file space, and the art looks just fine with lossy compression.
    JPEG wouldn't work since it does not support transparency/alpha. On a web page, a lossy compressed image with full alpha often works just as nicely as the PNG version, but hugely saves on bandwidth. To reduce a PNG's file size, colour reduction is the only option, and it can only be taken so far before it degrades the image too much.
    For example, a typical asset reduced to 1024 colours with alpha transparency at ~600x600px takes up around 130KB after running it through a PNG optimization power tool (forget saving an optimized PNG version from Affinity Photo). Saving the same asset at full colour in lossy WebP results in 55KB. Saving the 1024-colour version as WebP shaves off even more.
    And both the alpha and colour data can be independently processed in Webp, offering a lot of optimization potential.
    By now all major browsers support the file format. Only Apple obstinately refuses to do so. Webp potentially saves a lot of bandwidth, and thereby a lot of energy.
    By not supporting webp export, Affinity is only shooting itself in the foot.
    But all is moot anyway: Affinity still to this day does not allow the user to preview what the assets look like optimized. I can't even consider Affinity for image optimization work unless that is implemented. And quality/export control over PNG is terribly limited in Affinity as well (and to be fair in most design apps), so I use Color Quantizer (a dedicated PNG optimization tool) to perform the final optimization step. It will also export Webp.
    To be entirely honest, the entire Export persona in Affinity is not that useful to me in its current state. But I do confess to being a complete nitpicking asset-optimization nutcase! So it probably works just fine for the average user.
  16. Like
    Medical Officer Bones got a reaction from Riccardo B. in Affinity not for webdesign? No webp? Still?   
  17. Like
    Medical Officer Bones got a reaction from lepr in Affinity not for webdesign? No webp? Still?   
  18. Like
    Medical Officer Bones got a reaction from ashf in IMac 27 Retina 5K or HP Envy 32 All in One ?   
    Yes, read about that today.
    It is supposed to have a thin bezel, no fusion drive (good riddance) but SSD as a standard, and perhaps the new AMD graphics. Probably at least 16GB as standard too, and updated Intel chips.
    If so, definitely worth the wait - next week at the WWDC we should learn more.
    @Bugiardini I would wait a bit and see what is announced for the iMac lineup next week.
  19. Like
    Medical Officer Bones got a reaction from Oval in IMac 27 Retina 5K or HP Envy 32 All in One ?   
    I have worked with Macs, Windows, and Linux computers all my life. It's an emotional over-simplification to state that Windows runs slower than a Mac or Linux on the same machine. For example, Windows is great for gaming and 3D work, while the video drivers on Macs generally aren't that great or optimized. Pure rendering tasks (such as video and 3D rendering) rely on hardware for speed. But optimized software can make a huge difference as well: eCycles (an optimized Windows-only version of Blender's Cycles) renders extremely fast on Windows and an RTX Nvidia card. That option is not available on a Mac. And Metal is turning out to be less of a performance miracle than Mac proponents hoped for.
    Affinity runs better on Mac computers - but other software does not, and runs better on Windows, or equally well. It is too simple to say that MacOS runs software smoother than Windows, or Linux, etc. Fact is that for certain jobs Macs aren't that great a choice (3d work or gaming for example). Fact is that the latest MacOSX is a bit of a train-wreck. Apple is working out the kinks, though.
    And more powerful hardware does make a difference, even if one OS is more optimized than the other one. A CPU that runs twice as fast as another one will run any OS or task faster, even if it is less optimized to run on that hardware. The difference between Envy's i7 and an iMac's 27" 5K i5 CPU, together with a much more powerful video card and more RAM, as well as a 512GB SSD is going to make a painful difference for heavy duty graphics jobs - but perhaps less so in Affinity apps.
    But you have to ask yourself the question whether you want to invest MORE money in yesterday's hardware rather than current up-to-date hardware. The iMac 27" 5K machine is not that convincing a product at this point in time. And if you would opt for a Windows desktop machine, the hardware difference would be even MUCH more pronounced and obvious.
    That iMac is in dire need of a hardware update. Sweeping aside the obvious hardware advantages of a competing product at a far lower price by arguing that the OS supposedly runs more smoothly is clutching at straws.
    But I would agree that everyone has their own OS preference. If you cannot live with Windows, then the iMac is a fine machine.
  20. Like
    Medical Officer Bones got a reaction from Jowday in Affinity not for webdesign? No webp? Still?   
  21. Like
    Medical Officer Bones reacted to Jowday in Affinity not for webdesign? No webp? Still?   
    To my knowledge webp is mostly about the quality to size ratio - offering faster page loads and significantly smaller transfers of data between servers and visitors. Both as a replacement of PNG and JPG. "Media presentation" is a lot of things.
    We have around 50 million visitors a year on our website and although we utilize SVG more and more for symbols we have to use a lot of jpg's and some PNG illustrations. The smaller the better. The bandwidth used on displaying these images to 50 million visitors is significant. We always have bigger fish to fry than optimizing images but from time to time our architects mention images, page load times and the extreme peaks of visits we have more and more often. When we do have to optimize images the winner will be the smallest image file size.
    Different scenarios - different file formats.
    I remember when Google removed a few characters from their Google Search page and saved quite a bit of bandwidth yearly. An amazing amount, actually. Bits matter at both ends. The future is money and speed too.
  22. Like
    Medical Officer Bones reacted to JET_Affinity in DXF or DWG file import in Affinity Designer   
    I've used Canvas since it was a Macintosh Desk Accessory. Its primary differentiator had nothing to do with 'CAD', but that it combined raster and vector editing at the object level, as opposed to treating them as separate 'layers' like Silicon Graphics SuperPaint. Canvas is and always has been a general-purpose illustration and design program, squarely in the same category as FreeHand, Illustrator, Draw, and all the others.
    Deneba's marketing just never acted 'ashamed' of its being suitable for technical-commercial illustration, as if that's some kind of red-headed stepchild, like most other vendors in this category do. It later turned that into its 'niche' marketing theme. But the program is not really as niche as its marketing suggests.
    Canvas's interface style is 'dated' much in the same way that Inkscape's is: merely in regard to the faddish blacked-out-everything look that has become the de facto standard these days, which I'm convinced just spins off from the aesthetics of the video game generation. That's a fad which has itself become cliché and dated, and I'll be more than happy to see it fade away. (It's as bad a practice to do serious graphics work in dark environments as it ever was.) But just as with Inkscape, that has nothing to do with functionality.
    The more significantly 'dated' aspect of Canvas's interface is organizational metaphor. For example, 'Inks' and 'Pens' are arguably more metaphorically intuitive than 'Swatches' and 'Strokes,' but not to those now long accustomed to Illustrator and all the brands that incessantly mimic it.
    Affinity is doing just that, in principle. Canvas's marketing has long touted its…um…affinity for technical illustration. But, for example, browse its feature set and show me what's actually there expressly supportive of axonometric drawing.
    But here's the deal regarding Canvas:
    I rejoiced upon hearing that venerable Canvas had finally escaped the stifling clutches of ACD. I immediately thereafter abandoned it altogether when its new management foisted the Adobe-esque rental-only licensing scheme. So here is an over 30-year advocate of Canvas who will never pay another cent toward its continuance.
    No, Affinity Designer does not yet have a DXF import filter. But I'm confident it will, simply on the basis that it clearly does not eschew technical-commercial drawing discipline. It's just a matter of priority.
    You want to talk about Canvas? Has anyone here tried Corel Technical Designer (a program I do still pay for, because it so far does not force-feed that money-for-nothing marketing scam)? Do you realize that Affinity's axonometric grid feature is much like that program's (slightly earlier) similar feature, at about 8% of the cost? So no, Serif is not afraid of providing for tech-ish commercial illustration.
    It's not helping 'the cause' to continually trot out the 'CAD word.' I dare say most users of mainstream vector drawing programs have never done any drafting, and are turned-off by (if not downright fearful of) any mention of it.
    Why do we need DXF? It mostly boils down to this: Generally speaking, CAD programs don't export flattened drawings of their models as Bezier curves. They export curves as dumbed-down, penup-pendown-moveto faceted polylines in an increasingly archaic format called DXF that effectively undoes the supposed resolution independence of vector-based paths in the first place. It's needed for the sake of commercial illustration, not for the sake of CAD. That's what users with little-to-no CAD experience need to understand.
    The format itself is pretty lame. But for a decent mainstream general-use vector drawing program to work with it efficiently, other features are needed. You need a good flood fill feature. You need a really good join and smooth feature, hopefully (since this is the 21st century, after all) with at least some shape recognition capability. (Want to know how many times I've had to tediously 'inform' the drawing that those holes in the frame rails are closed ellipses?)
    So my hope, as always, is that delays for features in Affinity really do stem from its developers' desire to do something better than standard-fare, and their understanding that well-implemented features are not standalone, but need to integrate well with the rest of the feature set. That's how an elegant program becomes more than just the sum of its individual features. Doing that requires systematic priority.
    JET
  23. Like
    Medical Officer Bones got a reaction from ashf in DXF or DWG file import in Affinity Designer   
    Does anyone here remember the old Canvas? It used to be a design application that combined image editing, vector editing, and DTP all in one package.
    It actually still exists, and the reason why I bring it up is because the current owners/developers redirected their efforts towards CAD users (architects, product designers, etc.). It is mostly still the old Canvas, though: "sprites", and old archaic looking image adjustment windows which look like Photoshop 6 (NOT CS!) clones.
    But it is totally targeted towards technical manuals, specs, diagrams, etc. output. It will import just about any CAD format.
    For fun, I downloaded the trial version, and I am amazed at how little has changed under the hood in regards to the basic usability of the program - it feels extremely old-fashioned in places (the sprites! OMG I remember using those in the nineties!). The core functionality doesn't seem to have changed much, but a LOT of CAD/GEO stuff got bolted on top of it since last I used it. The GUI, however, is pretty much exactly as I remember it from over two decades ago.
    To be entirely honest, it feels a bit like a failed experiment. It has animation documents, but incredibly primitive animation options. It offers master pages, but it seems only one (1) master page per document is available. It is pretty much everything I expected from a CAD community targeted application, and would immediately turn off graphic designers because of its GUI.
    Anyway, I think it wouldn't hurt if a couple of CAD formats were supported by Affinity, although to lure CAD/GEO people away, more functionality and precision ought to be added; mere import of DWG and DXF isn't going to cut it. Learn from CanvasX and improve on it, because if such an old-fashioned design app is able to grab a fair chunk of that market.... There is definitely a market out there for a newcomer.
  24. Like
    Medical Officer Bones got a reaction from ashf in Affinity 3D ??   
    Serif would be committing commercial suicide if they did. Blender, anyone? Free, powerful, innovative.
    As long as Photo gets some 3d tools for texturing it will be fine.
  25. Like
    Medical Officer Bones got a reaction from lepr in Best technique to revert Unsharp Mask?   
    Exactly! Node-based compositors generally have way more non-destructive and flexible options for image editing. When I explain this concept to users familiar with node-based editing, they generally immediately understand what I am talking about.
    It is a cool trick to have access to in your toolbox. I wish and hope that Affinity will consider implementing an expanded layer opacity range as well, because it is forward-thinking, rather than sticking with 25-year-old Photoshop concepts.