Everything posted by lacerto

  1. Yes, I do not think that there would be much point in that for Serif. VectorStyler is a new app and can interpret this similarly to CorelDRAW, but its goal is to become some kind of Swiss army knife; not necessarily too robust, but nevertheless often quite useful. I can fully understand if old stuff like this is just dropped -- as mentioned, the situation regarding fonts is much different today than it was sometime around 1990.
  2. Ok. I was reasoning that if a font utility interprets the font mapping similarly to e.g. CorelDRAW, VectorStyler and Xara Designer (and as TransType appears to do, judging by the "N" fields, which I assume to mean glyph names in whatever system, and which seem to match the key mapping this font has in the mentioned apps that support it), then these utilities might be able to apply Unicode names programmatically, the way FontLab does, so that the font becomes usable at least to a certain extent. It would not be useful for anything more ambitious, just for a kind of rescue job. If apps other than FontLab (or some other professional font editors) cannot do this, then fine. I did not expect TransType to be able to fix this, but I was curious to see what it does.
  3. If you want leading relative to the font size, you can use percentage-based leading.
  4. I checked now what TransType does (I also checked for updates, none available): It can show the glyph map and the assumed glyph names (the ones that e.g. CorelDRAW, VectorStyler and other apps not depending on Unicode names use; meaning, e.g., that there is a wrong glyph at the assumed locations of Aring and aring), but unfortunately TransType does not create a Unicode name table when it converts the font (even when changing from OpenType TrueType to OpenType PostScript); a quick way to verify what mappings a converted font ends up with is sketched below. [Actually, whatever TransType does when it converts this specific font, it makes it unusable also for apps that could correctly use the original font.] Apparently there is very little need for these kinds of utilities now that legally free, relatively high-quality fonts are easily available, so this tool was created pretty much just to clear naming conflicts and Windows menu style hell, and to quickly get PS fonts upgraded to some kind of OpenType.
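A quick way to see whether a converted font actually got a Unicode character map is to list its cmap subtables, e.g. with the free fontTools library. A minimal sketch, with a placeholder file name; a font carrying only a legacy or symbol mapping will show no Windows Unicode (platform 3, encoding 1) subtable here:

```python
# Minimal check of the character-to-glyph mappings a font contains.
# "Converted.ttf" is a placeholder for the font written by the converter.
from fontTools.ttLib import TTFont

font = TTFont("Converted.ttf")
for sub in font["cmap"].tables:
    # platformID 3 / platEncID 1 = Windows Unicode BMP; 3 / 0 = Windows Symbol
    print(sub.platformID, sub.platEncID, "format", sub.format, len(sub.cmap), "mappings")
```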
  5. Thanks for the additional information. I was primarily wondering if there is something similar to TransType, which I suppose has not been updated for a long time (I also have not used it for a long time). It would be useful if it had a Unicode name update included in its features. Or perhaps it does that if you just take e.g. this kind of TTF and "convert" it to the same format (or OpenType TTF)?
  6. I see. I asked this because this font still works fine in the apps that I mentioned (and also seems to be listed correctly in the glyph browsers of Affinity apps and Photoshop CC 2023, even if the glyphs cannot be typed from the keyboard in these apps). I assumed that this would indicate that the encoding is somehow standard and could accordingly be applied consistently also when programmatically giving Unicode names to the glyphs. Anyway, it seemed to work roughly like that when I did it in FontLab 8.
  7. Ok, I see. There might be some serious issues in this method. I tried it with the file above, and opening a PDF 1.4 file (WITH transparencies, so not a PDF/X-1a:2003 file, but the issues would be identical) results in overprint attributes being lost, as well as the embedded font being converted to curves on export. In addition, the spot color is lost as well (similarly to what happens when using PDF Tools). You can reapply these attributes and color definitions in Xara, as well as replace the embedded font with one that is installed, but that can be a huge task. So, just to make the file compatible with 1.3, PDF editing that changes only a minimum of features would be ideal. testcolor_pressready_mod_(PDFX_Xara).pdf Do you have the Japanese tool from Antenna House mentioned earlier in this thread? It would be nice to know how it handles the test file posted above, and whether it handles the conversion as a kind of editing task (the way PDF Tools does) rather than opening, interpreting and recreating (the way Xara does).
  8. With this kind of font simulating handwritten text, a limited character set and missing kerning (the glyphs being connected) are not necessarily a problem (if they were not in the original font). In an earlier thread, generation of a Unicode name table was discussed, and with this font, just running "Generate Unicode" would make the font compatible with Affinity apps. I wonder if there is a simple low-cost utility that allows users to update fonts like these in a batch (or even just one by one; e.g., is this feature included in the Home Edition license of FontCreator?); a rough sketch of doing this with free tooling follows below. That kind of utility would of course not be foolproof, but in tasks like these, where a font does not actually need to be converted but just have its metadata "modernized", it would probably mostly make the font usable again. On the other hand, probably even this kind of metadata update would in many cases be considered illegal use of fonts (even if not distributing them). These kinds of non-Unicode-based glyphs are still supported e.g. in the latest versions of CorelDRAW, VectorStyler and Xara Designer, but not e.g. in Inkscape, GIMP or Photoshop. I can no longer check whether the support for these kinds of fonts was lost in Photoshop in the context of losing support for PostScript Type 1 (so that this would also affect these kinds of TrueType fonts, initially probably converted from PostScript fonts), or whether this is a more general issue related to missing Unicode support in a font, but the font still works perfectly well in Photoshop CS6. With subscription-based software like Photoshop, a user might lose support for a certain font technology overnight if an old version of the app is removed from the computer (I think Adobe does that eventually), and it would be well grounded to make these kinds of updates to fonts, even just temporarily, to rescue a job.
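For what it is worth, a rough, hedged sketch of the "Generate Unicode" idea using the free fontTools Python library: it maps glyphs to Unicode code points via their Adobe Glyph List names and adds a Windows Unicode cmap subtable. File names are placeholders, only the mapping metadata is touched, and glyphs with non-standard names would of course remain unmapped:

```python
# Sketch: add a Unicode cmap to a legacy TrueType font whose glyphs use
# standard (Adobe Glyph List) names. File names are placeholders.
from fontTools.ttLib import TTFont
from fontTools.ttLib.tables._c_m_a_p import CmapSubtable
from fontTools.agl import AGL2UV  # glyph name -> Unicode code point

font = TTFont("LegacyScript.ttf")
subtable = CmapSubtable.newSubtableFormat(4)   # Windows Unicode BMP subtable
subtable.platformID, subtable.platEncID, subtable.language = 3, 1, 0
subtable.cmap = {AGL2UV[name]: name
                 for name in font.getGlyphOrder()
                 if name in AGL2UV and AGL2UV[name] <= 0xFFFF}
font["cmap"].tables.append(subtable)
font.save("LegacyScript-Unicode.ttf")
```

Run in a loop over a folder, this would cover the batch case; whether modifying a licensed font this way is permitted is, as said, another question.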
  9. Do you mean using it by opening a PDF/X-1a:2003 (version 1.4) file and then just exporting it as PDF/X-1a:2001? Meaning that the file already has transparencies flattened and just needs to be properly exported as a file that has PDF version 1.3 in its metadata (a sketch of that idea follows below)? Or does the app have a specific conversion capability? PDF Tools by Tracker Software can take e.g. the kind of version 1.7 file that Affinity produces, containing transparency, and make it 1.3 (using a user-selectable target profile) and flatten the transparencies without rasterization (at least in these kinds of simple cases), but it would convert spot colors to process. I think it cannot be purchased separately but only as part of PDF-XChange, so it is a somewhat pricier tool (but nevertheless comes with a perpetual license, and is not nagware). testcolor_pressready_mod.pdf testcolor_pressready_mod_(PDFX).pdf
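If the content really is already flattened and only the declared version needs to drop from 1.4 to 1.3, even a plain resave with a forced version header might be enough. A minimal sketch using the open-source pikepdf library, with a hypothetical output name; note that this only rewrites the declared version and does not flatten or validate anything:

```python
# Resave a flattened PDF with a forced 1.3 version header (pikepdf).
# This does not flatten transparency; it only changes the declared version.
import pikepdf

with pikepdf.open("testcolor_pressready_mod.pdf") as pdf:
    pdf.save("testcolor_pressready_13.pdf", force_version="1.3")
```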
  10. Yes, it definitely is. Version 1 apps never crashed this way, and when they became non-responsive, this status always showed in Task Manager, and the tasks could be terminated in an orderly way and immediately relaunched without issues. I am not sure if it is related to having two builds of the "same" app on different frameworks (UWP and WPF) running concurrently, or if this happens also when running version 2 apps alone.
  11. On Windows, at least, I could not see any change in behavior between the release and the latest public beta of v2. Now that I have tested the feature more thoroughly, it has become significantly worse in version 2. Only the trimmed design (of a Designer document) can be placed in Publisher, and if that document has bleeds defined, the trimmed area is shown all wrong. Even placing the design as a document does not work. In version 1, Designer artboards were shown correctly in Publisher (even if bleeds were cut off), and any artboard could be used with bleeds if placed and positioned in Publisher as a document. Now it is a complete mess. I have not checked yet how this works on macOS. EDIT: I later had a test run on macOS (Ventura, M1) with release version 2 (not the beta), and it is similar there, or even worse. I could not e.g. select which artboard to show of a placed Designer v2 file containing multiple artboards (the first one is always shown). Also, even Designer v2 files with bleed defined but without any artboard at all could not be shown correctly. Beta testing is kind of a nightmare at the moment. I had version 1, the version 2 release and the version 2 public beta running concurrently (both Designer and Publisher, so 6 apps), and additionally the Photo version 2 release. They all became non-responsive simultaneously after one of the version 2 apps did (even if no longer shown to be non-responsive in Task Manager), and after having been killed, none of the Affinity apps, even the version 1 ones, would relaunch properly without rebooting the computer. This would certainly be the foremost bug to tackle!
  12. PDF/X-4 is fine also in Affinity apps, just be careful, if you have any placed PDF content to be passed through, that it does not get rasterized. If PDF/X-4 is not supported by the printer, but transparency flattening is not required, the most robust workflow would be using the CMYK/8 document color mode with the printer-recommended CMYK target profile, then exporting using the default "PDF (Press ready)" preset, but unchecking the "Embed ICC Profile" option and checking "Convert Image Color Spaces". This makes the file such that all native and placed image color values are resolved, leaves transparencies unflattened, and is compatible with practically all placed PDF content. This would not require premature rasterization at non-optimal resolution but would still give you a PDF that shows colors as predictably as possible before sending the file to the printer.
  13. Certainly. But I find it very surprising to hear those kinds of figures regarding the availability of live-transparency-capable RIPs. It may be that many print shops that say live transparency is ok actually just use prepress software, even just Adobe Acrobat Pro, for flattening (and keep flattened colors in vector format whenever possible) rather than pass the job directly to the RIP. We practically never use PDF/X-based production methods, but we always use PDF 1.4 or later and so leave transparencies unflattened. The recommended PDF color space here in Finland is however nearly always DeviceCMYK, no embedded ICCs, targeted to the printer-recommended profile, and sometimes a paper-specific profile. I have never experienced problems, with small or big printers. Requiring flattened transparencies is rare, but based on numerous topics on these forums, it seems that this is required by many self-publishing services targeted at the general public, including ones like Amazon.
  14. For information to anyone using Ghostscript to check separations and the distribution of colors (a typical invocation is sketched below): it was recently found out in another thread that this process is ICC dependent in the same way as Adobe Acrobat Pro, and would typically show wrong color values for any job that is ICC based (instead of DeviceCMYK) and has color profiles embedded. There might be a way to choose the matching preview profile to get correct object color values, but a complex job might have multiple color profiles embedded, so the values should be read using the referenced ICC profile for each object, and I do not think there is a way that Ghostscript can handle this (Adobe Acrobat cannot, either, but it has a tool that lets the user check the ICC and true color values one by one). What is more important is that it seems that having an embedded CMYK profile might actually cause an unnecessary translation of color values, even if the existing color values were already calculated for this color space (similarly to what happens with Ghostscript): just because of the embedded profile, the file would be considered ICC-based. This would, for example, result in all K100 values becoming four-color blacks. When using Packzview, the embedded CMYK profiles are not considered and native CMYK values are passed, but it is not clear whether this is something that requires experienced and careful print personnel to take care of when ripping the file, without which these kinds of production PDFs [= the kinds that Affinity apps create by default whenever choosing "PDF (Press ready)" based exports] might end up being recalculated when ripped.
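For reference, this is the kind of Ghostscript invocation typically used for pulling separations: the tiffsep device writes a composite plus one grayscale TIFF per plate, including spot colors. File names and resolution are placeholders, and, per the caveat above, the resulting plates may still reflect ICC-converted values if the job embeds profiles:

```python
# Render one TIFF per separation from a PDF with Ghostscript
# (assumes the gs executable is on the PATH). File names are placeholders.
import subprocess

subprocess.run([
    "gs", "-dNOPAUSE", "-dBATCH", "-dSAFER",
    "-sDEVICE=tiffsep",              # composite + one grayscale TIFF per plate
    "-r150",                         # preview resolution in dpi
    "-sOutputFile=plate_%03d.tif",
    "job.pdf",
], check=True)
```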
  15. It may be that the so-called "PDF compatibility rules" that are applied within Affinity apps (and, as far as I know, only when we are talking about professional page layout apps), ignoring of which results in rasterization (and typically the total ruin of placed PDF content not adhering to those rules, e.g. rasterization of embedded fonts, translation of color values, K100 turning to four-color blacks, loss of overprint and spot color information), are PDFlib based. But I see no reason why Affinity apps could not try to preprocess the job and flatten the transparencies by using Boolean operations so that rasterization would not be needed. Or at least develop this feature in the UI so that the user could apply it where needed. One good reason probably is that Affinity apps still struggle, also in version 2, with basic Boolean operations, even if e.g. Division, which is useful in transparency flattening, now works much better than in version 1 apps. Having transparency flattening as an app feature, like in Illustrator (or Adobe Acrobat Pro), would be very useful, as it would work around library-based limitations and would basically allow you to produce 100% compatible DeviceCMYK, PDF 1.3 compatible production PDFs that can be printed anywhere. I am not an expert in printing, but I have understood that live transparencies are still out of reach of many print shops. See this related discussion on the CorelDRAW forums and what is mentioned there by David Milisock (CorelDRAW also cannot, even in version 2022, do library-based transparency flattening, nor does it have direct/assisted support for it in the app): https://community.coreldraw.com/talk/coreldraw_graphics_suite_x7/f/coreldraw-graphics-suite-feature-requests/65825/exporting-complex-transparencies-and-gradient-designs-from-coreldraw-2020-to-pdf-and-losing-gradient-colour-definition But if it is available, it is definitely advisable not to use PDF/X-based production methods, because of the mentioned internal issues [in situations where there is placed content to be passed through]. ALL other export methods support live transparency as they are PDF 1.4+ based. PDF 1.3 [besides PDF/X-1 and PDF/X-3] is the only level that does not allow them, and as this method is not supported by Affinity (PDFlib), this causes the issues that many have with PDF/X-1a:2003: it is based on 1.4, while many preflight apps might check for version 1.3 as a sign of guaranteed transparency flattening, and accordingly might discard files that are technically perfectly printable. As mentioned, Affinity apps are not alone with this problem. As far as I know only Adobe apps and QuarkXPress can handle this well.
  16. I do not think that there is a bug. The original data merge setup document had a Data Merge Layout control that had zero specified in the Record Advance field. This means that the record pointer is not moved when processing the data, so that within the Data Merge Layout control the record pointer stays at the first accessed record, stays there also when processing the Stand field, and is advanced only after that, when the same field name is referenced next time on the next page. If the Record Advance property of the Data Merge Layout control is set to 1, it correctly advances the record pointer when processing the rows of the Data Merge Layout control itself, resulting in 1A and 1B appearing on the rows of the control on the first page, and thereafter the record pointer being moved by one, the Stand field now being read from the third row (resulting in a value of 2). The problem could be resolved e.g. by setting the value of the Record Advance field of the Data Merge Layout control to 1 AND, as mentioned, moving the Stand field text frame at the top of the layer stack to the bottom of it so that it would be processed first (processing is done in the Z-order of controls, which is the reverse of the object order within the Layers panel). The correct processing could also be achieved by using multiple Data Merge Layout controls (e.g., using one to process the exhibitor data and another to process the Stand field by first backing up the record pointer by one). essai fusion données_fixed.afpub EDIT: I tested this both on macOS and Windows and the behavior is identical on both platforms.
  17. Affinity apps do not support vector patterns. As a workaround, you could expand a stroke into a closed shape and then fill it with a vector image by clipping it into the shape forming a pipe. Or you can work directly with a curve segment and then use the Fill Tool to apply a bitmap for the stroke, and additionally use the Appearance panel to apply a second stroke in the background to define a background color. The latter method is useful as you can continue to shape the pipe in any way you like. The bitmap fill, however, rasterizes the curve it is applied to, so the pipe shape would not keep the smooth appearance that it has while it is edited. Version 2 apps should allow dragging bitmap fills for both the fill and the stroke of shapes from the Assets panel, but the feature seems to be buggy at the moment (both on macOS and Windows). Dragging bitmaps directly from the file system onto the bitmap fill icon on the toolbar does work, though.
  18. I guess that it is the degree of co-operation and co-working that dictates which tools need to be used. If the idea is that your team shares a common native file format for cross-editing, and that file format for the rest of the team is AI (even if including PDF streams), or Illustrator EPS, or PDF that preserves Illustrator editing capabilities, you really need to have Illustrator as your tool, since the AI and (Illustrator) EPS support of Affinity apps is very limited (as is support of these formats in other apps). [Even the pure PDF support of Affinity apps is not complete, and even if it were, it would not support many of the features that would be important for cross-editability, starting from layers.] This is pretty much the same situation as if the rest of the team worked with Affinity apps using .afpub, .afdesign and .afphoto (which are one and the same format), but one of the team members needed to have files sent as PDFs. Exchange of information would be severely limited and much of the editability lost. It is unlikely that this will improve in the future because proprietary file formats guaranteeing full editability will stay that way. Working with variable fonts, however, is not necessarily very common within Adobe-app-based teamwork, either, even if the most prominent Adobe apps support them well. Adobe Fonts, e.g., as far as I know, still only offers static fonts. As long as projects involve printing, use of variable fonts is, IMO, a bad idea.
  19. @kenmcd, thanks for the information. This is all pretty confusing. I have very little experience with variable fonts and cannot fully understand what's going on in this file. I later noticed (from the screenshot by @firstdefence) that there is stuff that is off the PDF MediaBox in the attached .AI file, which cannot be read even by e.g. the CS6 version of AI. Xara, too, only opened the top part of the job. On the other hand, the PDF stream part can be placed for passthrough by apps like QXP 2018, which does not support variable fonts but passes the embedded fonts through, and by Affinity apps, which will convert text to curves when further exporting to PDF. So it seems that, on the one hand, the fonts used by the design are embedded in a way that is compatible enough for passing through even by legacy apps (not supporting variable fonts), and, on the other hand, that the AI part is inaccessible also to legacy Adobe apps (which is to be expected). Basically, extended compatibility was not the primary interest in the delivery of this content, which was intended to be editable, so support for at least the post-CC2020 native proprietary format was assumed. Personally I think that advertising AI compatibility in non-Adobe apps is a bit misleading, as it seems it is mostly restricted to PDF streams when it is legal, and if there is licensed compatibility (as I assume there is in CorelDRAW), it is pretty limited. The rest (e.g. in Xara or VectorStyler) seems to be based on reverse engineering and sometimes works, sometimes does not.
  20. Yes, that's a rapidly increasing problem. For many fonts, static versions are available (e.g. when purchasing fonts or downloading Google fonts), but delivering superfamilies such as Acumin as one single variable font is understandably a convenient way to make sure all available styles of a specific font are included (especially if a packaging feature is not available); a sketch of cutting a static instance out of a variable font follows below.
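For completeness, a hedged sketch of generating a static instance from a variable font with the free fontTools library; the file names and axis values are assumptions (Acumin's actual axes and ranges may differ), and every axis you want frozen has to be pinned explicitly:

```python
# Cut a static instance out of a variable font by pinning its axes.
# File names and axis values are placeholders; axes not listed here
# would remain variable.
from fontTools.ttLib import TTFont
from fontTools.varLib.instancer import instantiateVariableFont

vf = TTFont("AcuminVariableConcept.ttf")
instantiateVariableFont(vf, {"wght": 700, "wdth": 100}, inplace=True)
vf.save("Acumin-Bold-Static.ttf")
```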
  21. Can you demo that? Path effects are an interesting feature in Inkscape, and this specific effect seems to have quite versatile uses (I checked a couple shown on YouTube). I could not, however, reproduce this kind of interactive behavior, so it would be great if you could show it!
  22. No problem on macOS Ventura, either. My guess is that your problems are caused by the two photos that you posted here on the forum having lost their connection to the original source (possibly in the cloud), and that the errors you experience are related to the app trying to locate these files at export time. If you relink those images, you might succeed in exporting. I say this because when I download your package, the two mentioned images show "Linked" as their status but do not have a proper path connected to them.
  23. On macOS I have tested variable fonts with Affinity apps a few times, and I think that they can render the glyphs fine but the spacing is all wrong (they use the same spacing -- that of the Regular style -- for all sub-styles). On Windows they can only render the regular instance of a variable font, and the font sub-style list just shows "Regular" (repeatedly) as the style of all static instances. So even if glyph rendering is ok, the Affinity apps cannot handle variable fonts correctly. It looks like this:
  24. Yep, I now tried it with Xara Designer Pro (19), and it opened the file fine without even having the fonts installed. It is a great tool for these kinds of rescue tasks; it could even convert the embedded glyphs to curves when further exporting to PDF, at least apparently without issues. EDIT: After which it could be edited (in a way) also in Affinity apps:
  25. As mentioned, the file has Acumin Variable Concept embedded. I am not sure if apps other than Adobe's can properly handle embedded variable fonts (mapping them to installed fonts when opening a document). Adobe Fonts also has this font as static fonts, but I am not sure if the static versions are available for free. The .AI file has a PDF stream, so just renaming the extension to .PDF and placing it in an Affinity app would allow rasterizing the file at high resolution so that it could be used (but of course not edited). I tried to open this in the latest version of CorelDRAW [which supports both AI, to a certain extent, and variable fonts], but it could not handle the encoding and could not open the text either as text or converted to curves, even though I have this font installed as the variable version. CorelDRAW can otherwise handle variable fonts fine and shows the font in the UI without problems. When CorelDRAW exports variable fonts into PDFs, it converts them to curves for better compatibility.