Search the Community

Showing results for 'video' in content posted in Feedback for Affinity Designer V1 on Desktop, Feedback for Affinity Photo V1 on Desktop and Older Feedback & Suggestion Posts.

  1. As you mentioned, in Illustrator, dragging a control handle onto the node snaps the handle to the node. If I'm not mistaken, snapping a control handle to the node basically removes the control handle. As far as I can tell, alt/opt-clicking a control handle in Affinity has the same effect that's achieved in Illustrator by snapping the control handle to the node. In the video below, the results appear to be the same: On the first shape, I dragged the control handle onto the node to simulate snapping it to the node. On the second shape, I alt-clicked the control handle to delete it. Let me know if I'm not understanding something. Delete-Control-Handle.mp4
  2. From my experience, alt/opt-clicking a control handle has the same effect that's achieved in Illustrator by dragging a control handle into a node. But I may not completely understand what you're trying to achieve. Can you provide screenshots or a video that provides a bit more detail of the end result you're looking for?
  3. Ok, so I figured I was missing something and found an Affinity video that explains the Document Resize dialog more clearly. It's actually simpler than the Photoshop process, but not intuitive if you don't realise that you can change the unit type with Resample unchecked (the dimension fields are greyed out when the default unit is pixels). So the process is still two steps: 1. To increase the DPI to 300: open the Document Resize dialog, uncheck Resample, change the DPI to 300, and save. (If you change the units to something other than pixels, you can see that the dimensions ARE linked to DPI, i.e. the dimensions get smaller as you increase the DPI. Hooray! The pixel dimensions don't change, which also confused me; someone could explain it, but I just need a simple workflow!) 2. To change the image dimensions (downsize only): reopen the Document Resize dialog, check Resample, change the dimensions, and save. Or just export at the required dimensions. This is actually an easier workflow! Sorry if this is simple and obvious, but anyone else who gets confused like me by the different dialog might find it useful (see the pixel/DPI sketch after this list). Here is the Affinity video: https://www.youtube.com/watch?v=KTmM8hw2r_M
  4. We need a way to distribute objects around a fixed or locked object; this would make it swifter to align freehand objects added later in a design. A few parameters could be added, similar to point tracking in video editing software, but specifically for freehand moves and snapping: select a specific point to snap to, move around, or align in relation to. This would be easier than creating a whole parameter page to tell an object how to move or distribute, and more precise than the snapping system currently available in Designer.
  5. Please add a feature to import a VIDEO file and get single frames of that video as images on separate layers, i.e. "Video Frames to Layers" as in Photoshop (File > Import > "Video Frames to Layers"). The option "Limit to every n frames" is essential to have in this feature. This feature maybe wasn't so important for a long time, but now it has become very interesting, especially for web designers, as one can "fake" a video-scroll effect with such an image sequence without having the disadvantages of video on (mobile) websites. I hope you get what I'm trying to describe here? ;) It would be very, very helpful just to KNOW if you are planning to implement this feature. Thank you very much in advance! And thank you SO much for all the great work you do! I wish you the best of luck with everything! :) P.S.: http://www.muse-themes.com/products/frame-scrubber Here you can find a video (next to "Widget Highlights") that explains why this feature would be so extremely nice to have. :) (A rough frame-extraction sketch appears after this list.)
  6. Hi @xicus, Welcome to the Affinity Forums. If you want to merge two overlapping nodes/end points (from different paths) into a single one, select both paths, drag a marquee around the overlapping nodes with the Node Tool and click the Join Curves button on the context toolbar (see video below). If the nodes of the paths do not overlap, clicking the "Join Curves" button will connect the closest end nodes with a line (not shown in the video below). join_curves.mov
  7. Hey all, I've made a video showing several features (and bug fixes) that would make AF better for game art. Hope you find it useful. Cheers, Passive
  8. Am I right in thinking Affinity Photo doesn't provide a way of capturing stills from 4K/8K video? VLC player does. Lightroom does. Photoshop Elements does. Seems odd... There must be a reason? Alan.
  9. Hello, This is not a question; more of an FYI/FWIW observation from a newbie. Day 5 of my trial of Affinity Photo (AP) and I was attempting to find something equating to my PS CS6 Adobe Camera Raw (ACR) workflow in AP. Having still not found it on my own, I next did a search for ACR on the Serif AP website, which resulted in only one link: https://affinity.serif.com/en-us/tutorials/photo/desktop/video/309301203/ This video tutorial on making a macro is going to be helpful (I thought), as I could possibly come up with some ad hoc batch approach to working with RAW files. And just as a sidebar, James does a great job, not just on this video, but on all of the videos that I've seen done by him - top marks. The content presented at about 2:26 into this video showed the user selecting the transparent area and then using the Grow/Shrink selection modifier with an entered numeric (pixel) value; however, no mention is made of the algebraic sign, or of how else the user defines the shrinkage or growth. On my Windows 10 Pro machine, when (Ctrl+B) is entered after making a selection, no tooltip or other indication clearly shows whether we're growing or shrinking. It might help to mention somewhere in this procedure the significance of the algebraic sign; e.g., Grow(+)/Shrink(-) Selection requires signed values.
  10. Hi, AP already has some great tools for making seamless textures (Affine Distort, Frequency Separation, Inpainting, etc.). There's a great official tutorial video on it. The end result is something like this: Although this is a properly seamless texture, it has some low-frequency features that mean the tiling/repeating will show up. One thing that would really help would be a tiling display mode so that we could preview the tiling effect, like this: (note: I boosted the contrast here so it's easier to see; a quick external tiling-preview sketch is included after this list). As an added bonus, it would be super cool if we could paint in this mode and have the painting automatically tile/repeat. On a related note, being able to rotate the canvas view would also be very handy, but I think this has already been requested. Thanks, Passive
  11. Hi, @Lorox. I noticed your post a few days ago. I also watched the Texturelabs video, and enjoyed it. I have posted a macro in the Resources section (see below) which duplicates this effect fairly convincingly. You might want to take a look.
  12. Hello, I came across this video when researching digital colour mixing. I'm not sure if Affinity has implemented this kind of tech in their apps. Anyway, it led me to this thread, so I thought I would throw my hat into the ring and express my interest in this. It was interesting nonetheless.
  13. This video has the wrong caption in Japanese: https://affinity.serif.com/ja-jp/tutorials/designer/desktop/video/301822426/ The caption seems to be the one from this video: https://affinity.serif.com/ja-jp/tutorials/designer/desktop/video/301821541/
  14. I found this YouTube video where someone (I don't think from Adobe) created his own script that works with Adobe Illustrator to import SVG files *while retaining their layer structure.* If what appears to be a random person is able to do this with Illustrator, I know that the Serif team can do it in Affinity Designer. https://youtu.be/fbOTRjbJtc8 I love Affinity Designer and its workflow, and I know that it will be faster for me to work in Affinity Designer to get the designs I want than in Moho Pro. If I cannot import SVG files like this, it will be too tedious to import complex shapes like characters that have many layers. I would therefore just use Moho Pro to design characters, meaning I would have to set Affinity Designer completely to the side, which I don't want to do. I spent money to purchase Affinity expecting I was going to use it, so it would be a shame not to use the software because of something simple like this. Can someone please make this a feature asap? LOL (A small sketch showing where the layer structure lives in an SVG file follows this list.)
  15. Hello, I love painting digitally. Affinity Photo is an AWESOME alternative to Photoshop for us digital artists. The only feature we need now is built-in screen recording/stroke recording. There are tons of artists heading towards iPad apps (Procreate/ArtStudio Pro) JUST for that feature. You can say that we could use a 3rd-party screen capture program, but that is simply not the same. In Procreate there is STROKE recording - that means if you take a 30-minute break during the painting, it won't be recorded. In Procreate, 10 hours of painting is compressed to about 10 minutes because of that. The process of painting and posting a video on YT/IG is so much easier as a result. Painting in Affinity Photo means that I have to record 10 hours of video, go through it in another program and cut out all the breaks. I also have to be sure I didn't record anything that I didn't want recorded. Also, I think it's THE BEST advertisement you can get. Procreate got popular because people were posting their artworks with timelapses on YouTube/Instagram and others were curious what the app was. Besides that, this would be THE FIRST professional desktop app with this kind of feature. It would be crazy easy to advertise that in the artist community.
  16. I just purchased Affinity Photo. I have an occasional use for a still photo viewer that is fast and shows large images (I use a lot of astrophysical images). But for every still photo I might open, I work with data from thousands of videos. These are high resolution and now extremely high frame rates (millions of small regions of interest per second). I was hoping that Photo would have some rudimentary ability to open MP4 (a collection of images), FITS (a collection of data including images), HDF5 (same) and many others (a small sketch of pulling image arrays out of MP4 and FITS files follows this list). With registration, stacking and image processing algorithms, the video-capable "cameras" are not really different from wireless webcams, POE security cameras, internet data streams, screen recordings, the many millions of lab instruments and image sensors, and more. (For the Internet Foundation, I review all sensor communities on the Internet. The image sensors, including many 3D sensors and technologies, are vast and pervasive. But I am getting a handle on the whole.) So if Photo wants to stay in the groove, it really, really ought to take data from any sensor. That can include all simple time series or groups of time series, since the visualizations are implicitly 2D, 3D, simulations. (Would someone tell the programmer who wrote this editor to turn off the reCAPTCHA after I click it, and extend the ___ timeout? I cannot write in a minute or less. If it changes while I am typing, it locks up and you have to reload the page, re-enter the title and email and waste time.) Richard Collins, Director, The Internet Foundation
  17. Suggestion: interpolate snapshots or history states to a video, e.g. select "Interpolate" > enter the number of frames, and AF writes out an (uncompressed) video file interpolating the settings between snapshots A and B (a simple crossfade sketch follows this list). Optional: more states A, B, C, D, etc. Greets, s.
  18. Blender is a great example of a program doing a terrible thing. It is ignoring all OS conventions and rolling its own user interface, making it inconsistent with EVERY operating system it runs on. The only legitimate reason it could possibly provide for that is to ensure neutral grays for more accurate color judgement. That is the only reason I would give it credit for, as most operating systems are lacking in providing for this requirement. I consider that a flaw of the operating systems which should be addressed, so that applications with critical color judgement requirements can inform the OS as part of an application manifest or an API call, and the OS would provide an appropriate neutral gray appearance which is otherwise consistent with the rest of the environment. Other reasoning I have encountered is generally misguided. In particular, many developers cite a desire to keep the application consistent between operating systems. The problem is that a user of a computer is likely to use multiple applications, and it is more important that they be consistent with each other than that they be consistent across operating systems: someone sitting at one computer and trying to use four different applications will find things that work four different ways, and trying to juggle them while switching back and forth is not a good thing. Don't get me wrong, I have Blender installed all over the place and use it from time to time myself - it is a great program in terms of the functionality it offers - but the situation with the user interface is something that should not be emulated, except by video games and other immersive environments. The use of configurable panels to lay out controls appropriately for the task is definitely a good thing; this is one area where Apple makes a misguided recommendation against them. It makes some degree of sense for consumer-level applications to limit options, as more casual users may easily get lost wondering where something disappeared to when the visibility and positions of panels are easily changed, but for many professional applications they are largely a requirement. This does not provide an excuse for the controls placed on those panels to defy OS conventions, including scaling. Other than the neutral gray issue, there is nothing that would prevent normal OS-provided controls from working in place of the highly custom ones Blender provides. It would be better to design the app in such a way that this is not necessary, but if it is going to provide a global scaling feature, that is certainly the least problematic way to do it. Yes, that is how things are. It is NOT how things should be. I think we are arguing two sides of a coin: I am indicating how I believe things should be, while you are anchored in the messy situation of the unfortunate way things are (but should not be). Different UI frameworks would be fine as long as they all ultimately followed the conventions established by the underlying OS rather than bypassing them.
  19. I've been a hobby photographer for 15 years and have moved from Photoshop to Affinity Photo. While I understand raw fairly well, I have never fully, deep down to the core, understood which changes I should make in raw development and which are equally fine to do in the normal Photo persona. So I would suggest that you make a YouTube video with the excellent James Ritson taking a deep dive and getting somewhat technical about which things we should adjust in the Develop persona (with raw) and why, and which things can be done equally well in the standard persona. Mainly I am after maximum quality, so I would like to know specifically which things I should do in the Develop persona.
  20. Dear Affinity Team Developers, The eraser/brush on a linear/gradient overlay is one of the most-used Lightroom tools among major artists. We miss this feature badly in Affinity raw editing. Currently "Overlay erase" only works on the brush overlay; it does not work on the gradient/linear overlay. I wish the overlay area of the linear/elliptical/gradient overlay could be increased or decreased using the brush and eraser. Look at the video below. Can something similar be done in the Affinity raw editor? This would reduce a lot of layer work in Affinity. I am sure it would draw many artists to Affinity from Lightroom.
  21. No, you should change the scale for your entire system. That way everything is consistent, provided the apps respond to the scale the way they should. If an app provides an option to adjust its overall scale independently of the OS setting, it is in effect giving the user the ability to make things inconsistent, and that by its nature is a misfeature. Providing an option to adjust (for example) toolbar size relative to the OS scale makes sense and would be reasonable, but scaling the overall interface differently from the OS setting is a bad idea. Apps which provide that option rather than following the OS setting, or which do not follow the OS setting correctly, need to be fixed. The only possible exception would be something that provides an immersive experience, like a video game, which presents an interface unique to that experience and which is by its very nature separated from the rest of the system. In this case the whole point of the application is to let the user become lost in an imaginary world, and it makes sense that the interactions would be tied to that world instead of the real one. For applications that are grounded in reality, however, it is more important to be consistent with other applications which are similarly grounded in reality. This includes productivity and creativity apps such as those from Serif.
  22. Today I was thinking about a totally non-essential function, but a very nice-to-have that seems very possible to me. I think it would be great if we were able to make time-lapse making-of videos of our work directly in Affinity. Since there is an unlimited history, it should be possible to write a script that steps back through the history and writes the image to a folder for every step in the history list. Then it's only a matter of stitching these images together in reverse order as a .mov, and there you have your time-lapse movie (see the stitching sketch after this list). The second part can even be done in Automator, as long as we are able to generate the PNG sequence. See: http://apple.stackexchange.com/questions/70224/automating-quicktime-image-sequence-creation-in-mountain-lion I know this is not an essential task at all, but I think it would be great to be able to show the world how we do things in Affinity. Having this function could generate considerably more buzz online. Anyway, I hope you like the idea!
  23. Hello to all! Crazy idea … Is it possible to #save #history as a #video while #editing or #creating anything in your #amazing #software ?! Please, make it possible! Thanks! #AffinityPhoto #AffinityDesigner #AffinityPublisher http://bobarev.com
  24. Will Affinity Photo be getting an update to import or open Panasonic 4K Photo frames so they can be focus stacked?
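
The pixel/DPI relationship mentioned in item 3 boils down to simple arithmetic: physical size = pixels / DPI. Here is a minimal Python sketch of that relationship (plain arithmetic only, no Affinity API involved; the pixel dimensions are made up for illustration):

```python
def physical_size_in(width_px, height_px, dpi):
    """Physical print size in inches for a given pixel size and DPI."""
    return width_px / dpi, height_px / dpi

width_px, height_px = 3000, 2000   # example pixel dimensions (made up)

print(physical_size_in(width_px, height_px, 72))   # ~ (41.67, 27.78) inches at 72 dpi
print(physical_size_in(width_px, height_px, 300))  # (10.0, ~6.67) inches at 300 dpi

# Same pixels, smaller physical size: exactly the behaviour seen when
# unchecking Resample and raising the DPI in the Document Resize dialog.
```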
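For the "Video Frames to Layers" request in item 5, a possible external stopgap is to dump every n-th frame of the video to numbered PNGs and place those as layers manually. A rough sketch using OpenCV (not part of Affinity; the file names are hypothetical):

```python
import cv2  # pip install opencv-python

def extract_frames(video_path, out_pattern, every_n=10):
    """Save every n-th frame of a video as a numbered PNG."""
    cap = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                    # end of the stream
            break
        if index % every_n == 0:      # the "limit to every n frames" option
            cv2.imwrite(out_pattern % saved, frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage:
# extract_frames("clip.mp4", "frame_%04d.png", every_n=5)
```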
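Until a tiling preview mode like the one requested in item 10 exists, a quick external preview can be made by repeating the exported texture in a grid so any seams or low-frequency repetition become visible. A sketch with Pillow (file names hypothetical):

```python
from PIL import Image  # pip install pillow

def tile_preview(texture_path, out_path, nx=3, ny=3):
    """Repeat a texture nx-by-ny times so tiling artefacts become visible."""
    tile = Image.open(texture_path)
    w, h = tile.size
    sheet = Image.new(tile.mode, (w * nx, h * ny))
    for y in range(ny):
        for x in range(nx):
            sheet.paste(tile, (x * w, y * h))
    sheet.save(out_path)

# tile_preview("texture.png", "texture_3x3_preview.png")
```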
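Regarding the SVG layer structure in item 14: in an SVG file that structure typically lives in the top-level <g> (group) elements, and Inkscape additionally marks layers with inkscape:groupmode="layer" and names them via inkscape:label. The sketch below only lists those groups, to show where the information an importer would need is stored; it does not import anything into Designer:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"
INKSCAPE_NS = "{http://www.inkscape.org/namespaces/inkscape}"

def list_top_level_groups(svg_path):
    """Print the id and (optional) Inkscape label of each top-level <g>."""
    root = ET.parse(svg_path).getroot()
    for group in root.findall(SVG_NS + "g"):
        print(group.get("id"), group.get(INKSCAPE_NS + "label"))

# list_top_level_groups("characters.svg")   # hypothetical file name
```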
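The formats mentioned in item 16 are straightforward to read outside Affinity; the sketch below pulls a single image array out of an MP4 (via OpenCV) and out of a FITS file (via astropy). File names are hypothetical and neither library is part of Affinity:

```python
import cv2                   # pip install opencv-python
from astropy.io import fits  # pip install astropy

def first_mp4_frame(path):
    """Return the first frame of an MP4 as a NumPy array (BGR), or None."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def fits_image(path, hdu_index=0):
    """Return the image data of one HDU of a FITS file as a NumPy array."""
    with fits.open(path) as hdul:
        return hdul[hdu_index].data

# frame = first_mp4_frame("capture.mp4")   # hypothetical file names
# image = fits_image("m31.fits")
```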
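The interpolation suggested in item 17 would ideally interpolate the adjustment settings themselves; the closest simple approximation from outside the app is a pixel crossfade between two exported snapshots. A sketch with Pillow, assuming both snapshots were exported at the same size (file names and frame count are made up):

```python
from PIL import Image  # pip install pillow

def interpolate_snapshots(path_a, path_b, n_frames, out_pattern):
    """Write n_frames images that crossfade from snapshot A to snapshot B."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    for i in range(n_frames):
        t = i / (n_frames - 1) if n_frames > 1 else 0.0   # blend factor 0..1
        Image.blend(a, b, t).save(out_pattern % i)

# interpolate_snapshots("snapshot_a.png", "snapshot_b.png", 25, "interp_%03d.png")
```

The resulting frame sequence can then be assembled into a video with the stitching sketch below.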
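For the second half of item 22 (stitching the exported image sequence into a movie), a few lines of OpenCV will do, assuming all frames share the same dimensions and that the files were written newest-first while stepping back through the history, hence the reverse sort (file names are hypothetical):

```python
import glob
import cv2  # pip install opencv-python

def stitch_reverse(frame_glob, out_path, fps=30):
    """Combine a PNG sequence, in reverse name order, into an MP4 time-lapse."""
    paths = sorted(glob.glob(frame_glob), reverse=True)
    first = cv2.imread(paths[0])
    height, width = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for path in paths:
        writer.write(cv2.imread(path))
    writer.release()

# stitch_reverse("history_step_*.png", "timelapse.mp4")
```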