Tim France
Staff
Posts: 47

Everything posted by Tim France

  1. Hi @Seneca, I don't really have anything specific to say other than that the team is still smashing out APIs! We've also had to go back and do some of the less interesting "that can wait" tasks, such as dealing with shutdown properly, e.g. you start running a script and then decide to shut down the app halfway through its execution - we need to make sure things like asynchronous ops are properly aborted and synchronous waits end gracefully. We've also been re-evaluating the high-level JS layers to make sure they are intuitive and usable. We went hell for leather to get bits of the app exposed, but didn't do them in a particularly good way. For example, this kind of sucks:
     let clrData = new RGBA8(0, 0, 255);
     let clr = new Colour(clrData);
     and should be something much more concise like:
     let clr = RGB(0, 0, 255);
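The more concise construction style described above can be sketched with a plain factory function. Note that `Colour` and `RGB` here are invented for illustration and are not the actual Affinity API:

```javascript
// Hypothetical sketch only - not the real Affinity scripting API.
class Colour {
  constructor(r, g, b, a = 255) {
    this.r = r;
    this.g = g;
    this.b = b;
    this.a = a; // alpha defaults to fully opaque
  }
}

// A thin factory keeps call sites short without losing the underlying class.
function RGB(r, g, b) {
  return new Colour(r, g, b);
}

// One call instead of constructing an intermediate data object first.
let clr = RGB(0, 0, 255);
```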
  2. Hi @kimtorch, Could you clarify what you mean? Are you referring to parsing some text with tags in it, or do you mean tagging text frames / individual bits of text with a tag and then having a script use those tags to find and manipulate bits of text? I'm assuming the latter, because the former doesn't require app support - it would just be a case of writing a script to parse the text. If you mean tagging bits of the DOM (like text frames) or individual glyphs within a story, that is something we're already considering. Could you describe a standard use case?
  3. This. The scripting devs are going to have to dedicate a significant amount of time towards documentation. We can't expect our docs team to document the ins and outs of an API (technically multiple APIs); it's simply not fair or realistic. Besides, we'd have to tell them what to write, which would mean pretty much writing the documentation anyway. Sure, they'll be able to present it in a way that looks good and is integrated with the normal app documentation, but the devs are going to have to provide much of the content. We're planning to use one of the many available documentation tools to do most of the actual generation for us. The current favourite is Doxygen, largely because most of us have had at least some exposure to it.
     Please remember too that the scripting team sometimes has to do bits of work away from scripting development. As the dev who wrote the DWG/DXF importer (and now exporter - see here 🙂), I tend to be the one tasked with its fixes and improvements. The same goes for Move and Shape Data Entry. Everyone in dev could put in those fixes, but it makes so much more sense if I do them because I'm most familiar with the code and should be able to do the work faster. The members of the scripting team do spend most of their time doing scripting work, but it's not 100%.
  4. Hi @Bruno Jolie, I've taken a look at your file and can confirm it is a bug in Affinity. The problem is the file contains a single Polyface Mesh, an entity type that Affinity does not support. Once we've skipped over that during import, there is nothing left in the document and, due to a particular set of circumstances, things go wrong. I've fixed the crash and, while I was in that area, I added support for polyface meshes. The changes will appear in a future version of Affinity (likely to be 2.4). Tim
  5. Hi all, The team has been making good progress. I don't have any updates on a release date but please be assured we are not sitting on our haunches - we want to get this feature out as much as you want it out!
     Naturally we've been exposing more of the apps' functionality to scripts, but we've been working on plugin-specific technology too. For example, there's a new asynchronous file I/O and networking API, initially driven by the Javascript layer, but then we thought it would be good for the lower-level C/C++ plugins to have access too. Obviously with local and remote I/O, we've had to be careful that a script isn't covertly sending user data somewhere, so we've introduced a permissions system for Javascript plugins - unless you allow a particular script network access, it won't be able to use the networking API. It may not be a big shiny WOW! feature, but it's important to get these things right.
     Anecdotally, I can tell you we've actually used some scripts internally to do some genuinely useful stuff that would have taken literally days to do manually. One script I wrote optimised a document and removed about 60,000 layers. There have also been relatively simple layout and alignment tasks that scripts can munch through in the blink of an eye. Last week I wrote a script that split a pixel layer into new pixel layers containing the blocks of grouped pixels. Even the pixel processing was done in the script - I didn't have to rely on the app to do the heavy lifting for me, because our performance is good enough to implement DBSCAN in Javascript.
     Please be patient. We know you all want scripting available yesterday, but we're getting there!
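As a taster of the pixel-clustering idea mentioned above, DBSCAN really is small enough to write directly in JavaScript. This is a generic sketch over 2D points under my own assumptions, not Serif's actual script:

```javascript
// Minimal DBSCAN sketch over 2D points - an illustration of the kind of
// clustering described above, not Serif's implementation.
function dbscan(points, eps, minPts) {
  const labels = new Array(points.length).fill(undefined); // undefined = unvisited
  let cluster = 0;

  // Indices of all points within eps of points[i] (including i itself).
  const neighbours = (i) => {
    const out = [];
    for (let j = 0; j < points.length; j++) {
      const dx = points[j][0] - points[i][0];
      const dy = points[j][1] - points[i][1];
      if (dx * dx + dy * dy <= eps * eps) out.push(j);
    }
    return out;
  };

  for (let i = 0; i < points.length; i++) {
    if (labels[i] !== undefined) continue;
    let seeds = neighbours(i);
    if (seeds.length < minPts) { labels[i] = -1; continue; } // mark as noise
    labels[i] = ++cluster;
    for (let k = 0; k < seeds.length; k++) {
      const j = seeds[k];
      if (labels[j] === -1) labels[j] = cluster; // noise becomes a border point
      if (labels[j] !== undefined) continue;
      labels[j] = cluster;
      const more = neighbours(j);
      if (more.length >= minPts) seeds = seeds.concat(more); // grow the cluster
    }
  }
  return labels;
}

// Two tight blobs far apart should come out as two separate clusters.
const labels = dbscan([[0, 0], [1, 0], [0, 1], [100, 100], [101, 100], [100, 101]], 2, 2);
```

In the real use case the "points" would be the coordinates of grouped pixels, with `eps` and `minPts` tuned to how tightly the blocks cluster.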
  6. You could perhaps argue a slider would work for the angle and rotation values but the others are unbounded, so it's hard to choose a sensible range for the slider.
  7. If I hack a very crude filter into Designer that simply removes anything that is more than 4E7 units away from (0,0), your file is imported like this:
  8. Hi @julwest78, I've taken a closer look and it's a tough problem to solve. The trouble is your model has entities in it that are spread across a huge distance. Open the file in AutoCAD, then go View -> Zoom -> All, then hover over the middle of your model view and take a look at the coordinates in the status bar at the bottom of AutoCAD. If you go to the bottom left corner, the coordinate read-out is (-2.5E81, -2.1E81), whereas the top right corner is (2.8E81, 1.7E81). If you're not familiar with scientific notation, 1E81 means 1 with 81 zeroes after it.
     To fit all of those onto a sensibly sized spread, Designer has to scale the entities down by a huge scale factor. AutoCAD doesn't have this problem because its model space is boundless, but Designer has to specify a page size and you can't specify a page that is 1E80mm across - that's greater than the size of the observable universe! Some entities are positioned normally, within a relatively normal range (about 1E6). Unfortunately even those start to break down when you have to multiply them by such extreme scale factors, so I don't think there's much Designer can do with this document without it being edited first.
     Ideally you would select the entities you want to keep and then invert the selection, but as far as I'm aware, AutoCAD doesn't provide a "Select Inverse" option. I'll have a think and see if there's anything easy I can add, but if you can get the document cleaned up somehow I think you'll have much more success.
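The arithmetic behind that breakdown is easy to demonstrate. The extents come from the coordinates quoted above; the page width is an invented but plausible figure:

```javascript
// Rough back-of-the-envelope arithmetic only - not Affinity code.
// Model extents taken from the status-bar coordinates described above;
// the page width is a hypothetical value for illustration.
const extentWidth = 2.8e81 - (-2.5e81); // ~5.3e81 drawing units across
const pageWidth = 1000;                 // hypothetical 1000 mm spread
const scale = pageWidth / extentWidth;  // a vanishingly small scale factor

// An entity sitting at a "normal" coordinate of about 1e6 units lands
// absurdly close to the origin once scaled onto the page:
const scaledPosition = 1e6 * scale;     // far below any usable dimension in mm
```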
  9. I see - this is a different file, but exhibiting the same problems. I will take a look.
  10. @julwest78 Are you selecting "Model" in the import options? The paper-space layout ("Presentation1") is empty, as shown in AutoCAD: However if you select Model, Designer imports this: The warning about the proxy entity is still valid - AutoCAD essentially gives you the same warning - but it should not affect the import.
  11. Thanks for spotting that. The behaviour on Windows is a little different from that on macOS, the latter being correct. Changing the tool should dismiss the dialog, and hitting Enter with the Node Tool selected doesn't bring up the dialog if you have anything with curve nodes selected (there's a good reason for this...). The Windows version will be fixed in the next beta.
  12. @julwest78 I have good news and some not-so-good news. The good news is I've found the reason for the bad import and have put a fix into Designer. Now your file will open normally and look like this: The not-so-good news is it is highly unlikely I'll be able to get the fix into v2.2 as we're quite far through the beta cycle, so unfortunately this will probably have to wait till v2.3. Regards, Tim
  13. Hi @julwest78, I've taken a look at your file. There is a proxy entity in it, which Designer warns you about when it opens the file, but I don't think that is the cause of your problems.
      If you choose "Single Page" on import and select the only paperspace layout available (called "Presentation1"), you get a blank document. This is correct behaviour and AutoCAD does the same thing.
      If you choose "Model" on import, Designer imports the document but has had to zoom out by such a huge amount that you can't see anything. That's because Designer thinks some of the entities in the model are a huge distance from the rest of them and, to display all of them, it zooms out. If you do a Select All (Cmd+A or Ctrl+A), you will see a bounding box around all of the entities, and that box is massive. If you zoom in on the corner of the box that's on the spread and enable Hairline view mode (View -> View Mode -> Hairline), you will see your entities appear. You could select just these entities and paste them into a new document, then scale / rotate them as a workaround.
      I will see if there's a legitimate reason why Designer thinks there are these huge entities in your document (like in layer ERLBT00L_ElectriciteBT). Regards, Tim
  14. Sorry, for a few reasons that's not something I can get into at this stage. Similar to StudioLink, Photo-specific scripting functionality will be available from any product as long as you have a Photo license enabled.
  15. Couldn't agree more, that's one of the reasons we've put these examples up. However, I should re-emphasise that the examples don't represent everything that's possible with scripting, even at this early stage. It's ok to find the examples underwhelming: note "just to demonstrate" and "basic". But like I said, constructive feedback is welcomed, just don't worry that this is all you're going to get.
      This next bit is mainly for any devs who might be reading this. The Javascript API is built entirely on top of the C API. Actually, it's built on top of a header-only C++ SDK that minimally wraps the C API so we don't have to worry about explicitly deallocating any handles that might be used. We've not cheated and called directly into the Affinity backend from the Javascript layer; it's all going through the C++ wrappers and therefore through the public C API functions. This is great for testing, as tests written in Javascript exercise all abstraction layers. We will likely release the C++ headers along with the C headers.
      There is a low-level Javascript API that is as similar to the C API as possible, which has the benefit of reducing the amount of documentation we have to write. There are some differences between function signatures, but essentially they are the same, with each C function "projected" into Javascript land. We then provide a set of JS files that wrap those exposed JS functions into ES6 classes and hide a lot of the boring boiler-plate that no-one cares about.
      For maximum performance, you'd use the C or C++ layer. For convenience, you'd use the high-level JS layer, which is what the examples use. Javascript consumers who want to design a different OOP layer are free to require() the API functions directly and wrap them up however they like. I would say that, as a dev coming from a mainly C/C++ background, I was surprised at how performant the Javascript levels have turned out to be.
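The layering described above - flat "projected" functions wrapped by ES6 classes - looks roughly like this. Every name here (the lowLevel table, Node, the affinity_* functions) is invented for illustration; none of it is the real API:

```javascript
// Hypothetical sketch of the two Javascript layers described above.
// Low-level layer: flat functions mirroring a C API, operating on handles.
// (In the real thing these would be projected from native code.)
const lowLevel = {
  affinity_node_get_name: (handle) => handle.name,
  affinity_node_set_name: (handle, name) => { handle.name = name; },
};

// High-level layer: an ES6 class that hides the handle boiler-plate.
class Node {
  constructor(handle) { this.handle = handle; }
  get name() { return lowLevel.affinity_node_get_name(this.handle); }
  set name(value) { lowLevel.affinity_node_set_name(this.handle, value); }
}

// Consumers see an ordinary object with properties, not handles.
const node = new Node({ name: 'Layer 1' });
node.name = 'Background';
```

A consumer who dislikes this OOP layer could ignore `Node` entirely and call the `lowLevel` functions directly, which is the flexibility the post describes.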
  16. Yes, there will be some notion of a plugin / add-on package and they won't have to be single-execution-and-stop scripts like in these examples. I'm not in a position to promise anything (even with the bribe of a cuppa) but we have done some work on panels because it feels like having some built-in support for UI would be of value.
  17. Yeah, I appreciate the gifs aren't great, so that's why I added the mov attachments so you could take a look at the code. We do have an fs module and it's loosely based on node's, but I feel I should reiterate the disclaimer that none of this is guaranteed to stay in its current form. If you look at emoji.mov, you'll see:
      let table = fs.readFileSync('/Users/tim/Documents/emoji-translations.csv')
          .toString('utf8')
          .split('\n')
          .map(line => line.split(','))
          .filter(arr => arr.length == 2);
      We've got the callback version of read, write etc. too. We aren't going to be node, nor are we going to try and be a web browser, but we will likely draw inspiration from both.
  18. Nothing is off the table, but we're only just getting started. All 3 of those are things the team has discussed at some point.
  19. @Old Bruce like this? I'm only showing a small amount of what we've done. And like @Patrick Connor says, there's tons more to do. rename selection.mov
  20. Hi folks, I thought I'd give you a quick progress update and let you see some of the things we've been working on at Serif Labs. We now have a scripting core that we're reasonably happy with and have put together a little test area where we can run Javascript code, so I thought I'd show you some of that.
      Things to note:
      • This is all VERY early, like pre-pre-pre-alpha.
      • The JS API is extremely fluid and is constantly changing.
      • This is not how the Affinity suite will run scripts and plugins; it is just a sandbox window for internal testing.
      • There is still a huge amount of work to be done.
      • I can't provide any timescales for when scripting will be publicly released, for Beta or Retail.
      • Constructive feedback is welcomed, but I won't be able to answer all questions - see previous points.
      • I'm not claiming any of these examples are particularly useful on their own; they're just to demonstrate some basic scripting functionality.
      • Re Javascript async - yes, we support it, I've just not recorded anything yet. The sleep calls are there for screen-recording purposes only.
      Here are some demos as low-res gifs. If you want to take a closer look, movs are attached.
      • Create a Mandelbrot image
      • Some dodgy physics
      • Insert a dragon curve
      • "Emojification" (translation?) of text
      • Create a grid of colours
      • Select and hide based on hue
      • Replay a document's edit history
      Attachments: mandelbrot.mov balls.mov dragon.mov grid.mov hue.mov replay.mov emoji.mov
  21. Hi @joe_l, I'd suggest it is a simplification rather than an enhancement. As well as preventing the creation of HUGE documents, the changes remove ambiguity and mean the "correct" thing is done all the time - this is what your drawing actually looks like. Because many CAD packages don't render lines on-screen with their specified weights, it's very easy to forget to update them to something appropriate for your drawing. The default line weight is 0.25mm, so if your drawing's dimensions are sub-millimetre, you should really change your line weights to something more appropriate, but we understand that isn't always a priority when making a drawing. We added the Hairline view mode specifically for dealing with drawings which have inappropriate line weights and to make Affinity feel more like AutoCAD.
      The ambiguity I was referring to is that the "Fit Model to Page" option would apply an arbitrary scale to your drawing. Most of the time, the page size isn't specified for model layouts in DXF files, so the software would have to default to something. While that may look nice on screen, you'd get nonsense results from things like the measure tool, or even worse, you'd get results that are almost correct, but not quite.
  22. Hi @woefi, We made the changes to the import options because it was possible to specify combinations that would ask the app to do crazy things (e.g. make quadrillion-pixel documents). In your case, I'd suggest either:
      1. Turning on the Hairline view mode, which will render lines without their weights, just as AutoCAD does. This is likely the preferable option as it means things like the measure and area tools will give you correct answers.
      2. Overriding the insertion units to something larger. You could correct the measurements with a custom drawing scale after import if necessary.
      Hope that helps. Tim