Everything posted by fde101

  1. A bit lost here. If you are looking for the option to switch to the standard Mac keyboard shortcuts for navigation and the like, it is in the preferences of the Affinity applications, under the Keyboard Shortcuts area.
  2. Hiding options within what essentially amounts to a list box, when there is no technical advantage to doing so, does not strike me as a sensible idea. I am firmly opposed to this one as suggested above - we would need an extra click to find those other options and would gain absolutely nothing from this. If the concern is that the user is having trouble pinpointing one preset among the many, I would suggest instead adding a "favorites" feature so that specific templates could be flagged as favorites and show up in a separate "Favorites" tab - that tab might be broken down by the categories from the other pages and show only the favorite items underneath each one.
  3. Based on a YouTube video I found demonstrating Create Booklet 2, it looks like each of the programs has capabilities the other does not. I haven't tried Create Booklet 2 (yet?) so can't really comment on it beyond that at this point. FitPlot has a free demo on the company's web site, but I don't see one for Create Booklet 2.
  4. FitPlot ($15 on the app store) can do this, but the interface takes some getting used to.
  5. You seem to be assuming yours is the only profession that uses high-end software? For example, the density values used to calculate physical dimensions are meaningless when the person using the software is doing video/cinema work and nearly as meaningless for most web development. I would not discount either of those as being somehow unprofessional.

I didn't. I said it wasn't universally useful - there is a difference. It is not useful if the images are being transferred into software that will ignore the information anyway. It is not useful if the user is putting it into software where it will be scaled to fit some container or manually positioned/scaled such that the information is going to be overridden regardless.

And this is a fair request - again, I never said it wasn't. I am pointing out that there are limitations to such a feature that should somehow be made clear to the user. At a minimum, this would need to be coupled with an option to set the density in order to make it a bit more clear what it was doing, since the underlying raster image is *ALWAYS* physically stored in pixels and those metrics on their own could be more confusing than helpful. The pixel size should ideally be presented as well to make it a bit more obvious what is being stored behind the scenes, as the physical dimensions are meaningless unless the software the image is being imported into accepts them.

It is also questionable whether or not this should be available for formats such as GIF, TGA and EXR, which don't support tracking of the density, as the physical dimensions would not carry forward to other software regardless. While the pixel size could be calculated from those dimensions and stored, I would be concerned that people would then start complaining that the dimensions did not carry forward to their other software like they did with the other formats, and we would get a whole new topic repeatedly started on this mess.
  6. This is true; after the selection is made you can drag it freely. This functionality references a shortcut which is available when creating shapes but which does not seem to have been applied consistently to the creation of pixel selections. For shapes, this is used while initially creating the shape - before letting go of the mouse while dragging out the shape to begin with - to allow you to adjust the position while still creating it. Yes, you can move and resize it after the fact, but this may be a bit quicker sometimes.
  7. And what exactly would "removing" the style accomplish? The objects are not linked to the styles - they are in effect "removed" the instant they are applied. The individual properties of the styles are copied to the object, wiping out whatever was there before. If you "remove" the style, what would those properties be reverted to? Would you simply set all of them to default values?
  8. Yes, and right next to that is a density (DPI) setting, allowing the pixel size to be calculated from the units you entered. If that program uses the optional capability to store that setting as metadata within the file, and the receiving program similarly reads that metadata, then it can reverse the calculation and determine the physical size from the pixel size and density. That can work well if all programs involved correctly support that optional feature of the file formats, but not all of them do (the Affinity programs being one example of programs that currently do not; I'm quite sure I can find others rather quickly if I start digging for them - video editors for example are unlikely to care about or use such metadata).

Also, most of those formats do not store what units were selected, so if you were to export it in inches and the program you open it in is configured for centimeters, it will probably show the size in centimeters when opening, not in the inches you used while exporting, though that should hopefully be much less of a concern.

Behind the scenes, the raster formats are ALWAYS measured in pixels internally; that is the "true" measure of their size regardless of how the programs handle the metadata for providing a default scale when opening them. GIF, TGA and EXR, from what I can tell, don't even support pixel density as an option, so the receiving program would need to default to or guess at the pixel density unless you manually provided it for that capability to work - otherwise you could only get the size back in pixels with those formats, or with any image saved without density information.

Again, there is room for improvement here, I'm not arguing that point - what you are asking for is reasonable, but it is not universally useful nor is it implemented across the board by some of the types of software that would be opening these images, including "professional" applications of various kinds.
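The round trip described above is just arithmetic; a minimal sketch in Python (the helper names are my own for illustration, not anything from the Affinity applications or the file-format APIs):

```python
# Round-trip between physical dimensions and pixel dimensions via DPI.
# Illustrative only: these helpers are not part of any real API.

def pixels_from_physical(inches: float, dpi: float) -> int:
    """Exporting program: units entered by the user -> pixel count."""
    return round(inches * dpi)

def physical_from_pixels(pixels: int, dpi: float) -> float:
    """Receiving program: pixel count + density metadata -> physical size."""
    return pixels / dpi

# An 8.5 x 11 inch page exported at 300 DPI:
width_px = pixels_from_physical(8.5, 300)    # 2550 pixels
height_px = pixels_from_physical(11, 300)    # 3300 pixels

# A program that reads the density metadata can reverse the calculation:
assert physical_from_pixels(width_px, 300) == 8.5

# A program that ignores (or never receives) the metadata only sees pixels,
# and will fall back to its own default density - e.g. 72 DPI:
print(physical_from_pixels(width_px, 72))    # roughly 35.42 inches
```

The last line is the whole problem in miniature: without the density metadata surviving the trip, the receiving program has no way to recover the physical size you intended.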
  9. Based on the picture in the first post, the 7 x 10.5 area was meant to include the bleed. In the Affinity apps, the bleed exists outside the defined page area, so you should be setting the page size in your document to the "trim area" of 6.75 x 10.25. The template will then need to be placed such that it extends into the bleed area, which will exceed the page size slightly. Note that the template does not appear to have the bleed as being the same size as the margins, so it will not extend all the way to the edge of the bleed area - you would need to scale and place it so that the "trim line" fell precisely on the page boundaries. I'm not sure why you would need that template though if you are using the bleed and margin features already present in the application.
  10. I hadn't noticed before that you could do that with the space bar while creating shapes. Very nice, and I agree that it would be nice to have that added to the pixel selection tools also.
  11. The fun thing is that in a new document the move tool isn't very useful either unless you start by creating guides. For all other use cases you need to switch to the intended tool anyway, so it shouldn't really make much difference which one comes up by default - it will be the wrong one 99% of the time in all three apps.
  12. I think what you are asking for is the ability to create a shape from the selection, so that a stroke can be applied? I believe that has been requested before and is something that was being looked into, it probably just hasn't been a high priority yet. In the meantime, depending on what you actually need to accomplish, you could try using Select -> Outline to convert the selection to a selection of the area to be stroked, then Layer -> New Fill Layer to create a layer from that selection which a color can be applied to using the Color panel. This might not be exactly what you are asking for, but depending on what you are trying to do, it might be a workable solution in the interim.
  13. The JPEG file format, like most raster formats, has no concept of different units of measure - they are purely in pixels internally, though some raster formats will track a DPI value (pixel scale) which is otherwise unused, typically just an FYI for any software that happens to pay any attention to it.

With a raster format that tracked pixel density, if you were to set the size in millimeters, that would be problematic as the question would arise as to whether you were trying to adjust the number of pixels, or simply their density (via the DPI setting), and that would leave the software clueless as to how many actual pixels to export the image at. For this to work, you would need to specify both the size AND the density so that the software could calculate the number of pixels, and some software may very well ignore the density setting anyway, which makes that of questionable value in the general case.

That being said, in a "pure" JPEG file there is no way to store pixel density, so it would not transfer to other software anyway. There is an extension to JPEG, called JFIF, which allows this information to be stored, but since there is no mechanism to adjust the pixel density within the Export settings in Affinity Photo, I would need to assume that Photo is not supporting this extension upon export. Even if it were, the software reading the JPEG would also need to support that extension and make use of the information for it to have any value.

The GIF, TGA and EXR formats don't support storage of pixel density at all. PNG and TIFF make it optional much as JPEG does. From what I can tell, the Affinity products don't currently support pixel density settings for any of the supported raster file formats, only for raster data embedded in SVG and the various supported document formats (PDF, etc.).
There is room for improvement here, but this information is likely to be ignored by a lot of other software anyway, so even if implemented you would need to be careful about relying on it.
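For the curious, the JFIF density metadata mentioned above is simple enough to inspect by hand. This is a minimal sketch based on the published JFIF layout (SOI marker, then an APP0 segment carrying a units byte and two 16-bit densities); the function is my own illustration, not code from any real library:

```python
import struct

def read_jfif_density(data: bytes):
    """Return (units, x_density, y_density) from a JFIF APP0 segment,
    or None if the file carries no JFIF header.
    units: 0 = no units (aspect ratio only), 1 = dots/inch, 2 = dots/cm."""
    if data[:2] != b"\xff\xd8":               # SOI marker: not a JPEG at all
        return None
    # When present, the JFIF APP0 segment immediately follows SOI.
    if data[2:4] != b"\xff\xe0" or data[6:11] != b"JFIF\x00":
        return None                            # "pure" JPEG: no density stored
    # Segment layout after the identifier: 2-byte version, 1-byte units,
    # then two big-endian 16-bit density values.
    units, xd, yd = struct.unpack(">BHH", data[13:18])
    return units, xd, yd

# A hand-built minimal header: version 1.1, units = dots/inch, 72 x 72.
sample = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00\x48\x00\x48\x00\x00"
print(read_jfif_density(sample))   # (1, 72, 72)
```

A file that returns None here is exactly the case described above: the receiving program gets pixels and nothing else.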
  14. From one perspective the hand tool being in a different place from the other apps is a curious inconsistency. But then, the other apps deal primarily with vector objects and the move tool would be a primary tool of choice for them and thus a good default. As the primary intent with Photo is to work with pixel layers in a manner which makes the move tool questionable for a typical document, it doesn't really make much sense to default to that one either. Perhaps the marquee tool should be at the top?
  15. This has been discussed on several other threads; it already works on the Mac, either on a trackpad or on a Wacom tablet. Indications are that the Serif team is planning on bringing similar capabilities to the inferior platform at some point, but that it is generally harder to do on that platform which is why it is taking longer to get that in place. I would assume that when they get the rest of it implemented for Windows they will probably do so for touch screens also, but that is just my guess.
  16. This piece of the puzzle sounds like it might belong in a bug report rather than a feature request?
  17. This can be done with complex dedicated software, which is most likely how the print company would be handling it.
  18. I believe this should be a feature request (to have Publisher recognize and discard the undesired text from the clipboard) rather than a bug report.
  19. The original ASCII standard dates back to a time when some computers had a 7-bit byte size; it wasn't as "standardized" as the 8-bit bytes (octets) that are now ubiquitous. This gave a range of 128 code points that could be used to contain the character set. With a desire to have one code point map to one character, an entire set of basic characters was squeezed into these 128 positions.

With most computers eventually using 8-bit bytes (the mainframes that ran Multics used 9-bit bytes, but those aren't really around any more either), there were another 128 code points available. People in different regions of the world would use different mappings for these "extra" 128 positions (128 - 255, or hexadecimal 0x80 - 0xFF), which became known as "code pages", while the first 128 (0 - 127, or hexadecimal 0x00 - 0x7F) would stick with the basic ASCII set for compatibility. One of the most common code pages was the "Latin 1" set.

One of the problems with this arrangement was that with different people in different countries using different code sets, the same code would map to different characters for different people, so transferring files around the globe would be a bit of a mess. To help resolve this (and other problems with that arrangement), the Unicode standard needed a lot more code points. To maintain compatibility, the "Basic Latin" block of code points at 0x0000-0x007F maps exactly to the original ASCII character set. This means that the basic characters (such as the +, -, * and / characters) are in that block, and the second block is mapped to the characters from the "Latin-1" code set, again for compatibility with one of the more common arrangements. Additional characters become code blocks beyond those first two.

Because of the history of how these code blocks were developed, they probably are not the most logical delineation for a user-facing glyph browser.
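The compatibility mapping described above is easy to verify from any language that exposes code points; a quick sketch in Python:

```python
# The first two Unicode blocks line up with the legacy encodings, so a
# character's code point equals its byte value in those encodings.

# Basic Latin (U+0000 - U+007F) is exactly ASCII:
assert ord("+") == 0x2B
assert "+-*/".encode("ascii") == bytes(ord(c) for c in "+-*/")

# Latin-1 Supplement (U+0080 - U+00FF) mirrors the Latin-1 code page,
# so decoding the single byte 0xE9 as Latin-1 yields U+00E9 ('e' acute):
assert b"\xe9".decode("latin-1") == "\u00e9"
assert ord("\u00e9") == 0xE9

# Beyond 0xFF the one-byte mapping breaks down entirely - Latin-1 simply
# cannot represent characters from the later blocks:
try:
    "\u20ac".encode("latin-1")   # the euro sign is outside Latin-1
except UnicodeEncodeError:
    print("U+20AC has no single-byte Latin-1 code")
```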
  20. Hi @resultsphotography, welcome to the forums! You are in the "Photo Beta on Mac" section of the forum. For the Windows version betas, you should be looking here: https://forum.affinity.serif.com/index.php?/forum/34-photo-beta-on-windows/
  21. Recreate the document? Once the document is saved in the beta it won't open in the (older) released version any longer.
  22. I don't think I have ever seen anything that compares how effectively iPadOS uses the iPad hardware with how effectively Android uses its hardware. I would guess as well that iPadOS would be less demanding largely due to the fact that it is not encumbered by Java technology the way that Android is, but that would simply be a guess. I have seen a similar level of stability issues between the two platforms, but that is for my own use cases on the specific devices I own. Do you have any research data showing the difference in stability between the platforms?

iPad had better 3D graphics support than Android did up until recently. With the addition of Vulkan to recent Android versions I would expect that from an architectural perspective the platforms are likely on par now, but as Android hardware varies so wildly the quality of drivers will come into play, plus as has been pointed out, iPad does have the superior hardware at this point. For 2D graphics I would guess they are closer to matching, but it would be a guess on my part.

As to being more "safe" and the relative lack of piracy, that is a trade-off: the closed ecosystem of the app store on the Apple platforms makes it easier to control things to the point that fewer risks are taken, but the downside is that they reject a number of categories of applications that are readily available on Android, making it worthwhile for some of us to maintain both platforms as we can do some things on each that can't be done on the other (at least not easily). This includes things such as some types of network diagnostic tools that will run on Android platforms but which require a level of network access that Apple won't permit under iOS/iPadOS.

In this context perhaps, but this is not always generically true. Some software (including operating system software) can be provably superior when compared to others for specific use cases depending on how their algorithms handle various conditions.
As a simplified example, consider memory allocation. When an operating system has X amount of memory and needs to distribute it across multiple processes (applications), there are a number of approaches that could be taken. If a process comes in and says "I need X amount of memory" then the OS needs to find a block of memory to hand to the process. Depending on how the data structures are organized by the operating system, the amount of time it takes to find that block of memory may or may not be predictable, may or may not have an upper bound on it, etc. It is also possible that the OS might need to allocate more than the amount of memory requested in order to keep the process of allocating the memory efficient, depending on its goals. For example, one algorithm might return the exact amount of memory requested if it is available but take an unpredictable amount of time to obtain it, say ranging from 1 ms to 30 ms but normally taking 3 ms or less. Another might return the amount of memory requested rounded up to the nearest 1 KB but do it in 5 ms every time. If you are creating a word processor, the "average" performance of 3 ms or less combined with the fact that the first algorithm does not waste as much memory would probably make it a more practical choice. If you are creating a medical device that patients rely on to keep them alive and it needs to perform some task every 40 ms precisely without fail, the unpredictable amount of time to allocate memory in that same first algorithm could result in someone's death. Again, this is admittedly a simplified and not particularly relevant example, but these types of trade-offs are all over the place when it comes to the design of a complex piece of software such as an operating system. 
Sure, taste can come into play to some degree, but there are times when a particular piece of software (operating system or otherwise) is simply not suitable for a particular task, or perhaps is only available to run on hardware which is not suitable for a particular task, and there are times when the choice of algorithms and the set of facilities exposed by a piece of software are simply better matched to solving a particular problem.
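The "round up to the nearest 1 KB" strategy from the memory-allocation example above can be sketched in a couple of lines; this is a toy illustration of the trade-off, not any real operating system's allocator:

```python
BLOCK = 1024  # hypothetical allocation granularity in bytes (1 KB)

def rounded_alloc_size(requested: int) -> int:
    """Round a request up to the nearest whole block.
    Predictable and constant-time, at the cost of wasting
    up to BLOCK - 1 bytes per allocation."""
    return ((requested + BLOCK - 1) // BLOCK) * BLOCK

print(rounded_alloc_size(1))      # 1024
print(rounded_alloc_size(1024))   # 1024
print(rounded_alloc_size(1025))   # 2048
```

The wasted bytes are the price of predictability: an exact-fit allocator hands back less memory but may take a variable amount of time searching for a suitable block.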
  23. My guess is that the "big ticket" items are in play by now though we may see a few smaller things sneak in before it is final... but that is just a guess. If nothing else I suspect they may still add a few more tests to the preflight panel and support for conversion of a few additional things from IDML.
  24. Craft Room and its ilk have been discontinued. I have a Gypsy I used with my old Cricut that basically died on me before I replaced it with a Maker. Design Space is overall better to work with than the old solutions, though I'm not fond of any of these relying so heavily on the outside world - if Cricut ever discontinues Design Space the way they discontinued Craft Room then there is no fallback for the machine to continue being usable. That to me is a Design Flaw.
  25. Not at this time. There has been some discussion about the addition of a plugin API as well as scripting support, but there have been no indications of if or when this will actually be introduced, though from some of the replies that have been posted, the developers do seem to be open to the idea in general. I am not as familiar with how the Silhouette works, as I have a Cricut, but with Cricut Design Space it is possible to import an SVG. Users (including myself) have experienced issues with the SVG files produced by AD, and a workaround seems to be to export a PDF from AD, import that PDF into Inkscape, then export the SVG from Inkscape to load into Design Space. I have found that this generally works. Does the Silhouette software support SVG import? That might be your workaround for the time being if it does.