Exporting large amounts of assets

So it's been a while since I've used Illustrator, and I'm currently using CS5.
I'm creating UI sets for different applications. I have several button states - neutral, hover, pressed - and I am trying to export these as PNGs using artboards. However, when I do this the exported files are named button-01, button-02, button-03, etc. This is very frustrating, as I will have to go back and manually rename all these buttons. Is there a better solution or fix for this?
Any advice is welcome! Thanks

Three states -- neutral, hover, and pressed (where is "disabled", to complete the Fearsome Four?) -- don't sound like a lot. Or do you have lots of buttons, all in one single file?
It's perfectly possible to script this, anyway, except for the small but important question of how a script would know which artboard corresponds to which button/state combination. Can artboards be named?
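
In CS5 artboards can in fact be renamed in the Artboards panel (Window > Artboards), and a script can read those names back when exporting. Here is a minimal ExtendScript (plain JavaScript) sketch of the idea - run it via File > Scripts > Other Script; it assumes the document has been saved (so doc.path points at a real folder) and that each artboard already carries the file name you want:

    // Export every artboard as a PNG named after the artboard itself,
    // e.g. "button-neutral.png", "button-hover.png", "button-pressed.png".
    var doc = app.activeDocument;
    var opts = new ExportOptionsPNG24();
    opts.artBoardClipping = true; // clip each export to the active artboard

    for (var i = 0; i < doc.artboards.length; i++) {
        doc.artboards.setActiveArtboardIndex(i);
        var name = doc.artboards[i].name; // as set in the Artboards panel
        doc.exportFile(new File(doc.path + "/" + name + ".png"),
                       ExportType.PNG24, opts);
    }

With artBoardClipping set, exportFile writes only the active artboard, so the loop produces one correctly named PNG per artboard instead of the button-01, button-02, ... series.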

Similar Messages

  • Error when exporting large amount of data to Excel from Apex4

    Hi,
    I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet, this is not using a csv file.
    It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    Has anyone else had this problem? We were wondering if it was a parameter in Apex4 that needs to be set.
    We are using Application Express 4.1.1.00.23 on Oracle 11g.
    Any help would be appreciated.
    Thanks
    Sue

    Hi,
    > I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet, this is not using a csv file.
    How? Application Builder > Data Workshop? Apex Page Process? (Packaged) procedure?
    > It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    > We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    Have you changed your webserver in the process? Say, moved from OHS to ApexListener?
    > Has anyone else had this problem? We were wondering if it was a parameter in Apex4 that needs to be set.
    > We are using Application Express 4.1.1.00.23 on Oracle 11g.
    > Any help would be appreciated.

  • Exporting large amount of data

    I need help.
    I have to export the results of a query (which can be large, up to a million records) into a CSV file.
    Is it preferable to create a temporary file on the server and then serve it to the client, or is it better to write the response directly while browsing the recordset?
    Thanks
    Marco
    p.s.: sorry for my poor english.

    I am looking for the same thing: I want to create the CSV file inside the servlet and send it to the client using a download option (see the sketch below).
    Regards,
    Suresh Babu G
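
    For a million rows, streaming the response while you walk the recordset is usually preferable to a temp file: no cleanup, no double disk I/O, and the download starts immediately. The thread is about Java servlets, but the pattern is stack-neutral; here is a hypothetical sketch of it in TypeScript (Node's built-in http module, with fetchRows standing in for the recordset loop):

        import * as http from "http";

        // Hypothetical stand-in for iterating a JDBC ResultSet / recordset.
        async function* fetchRows(): AsyncGenerator<string[]> {
            for (let i = 0; i < 1000000; i++) yield [String(i), "name-" + i];
        }

        http.createServer(async (_req, res) => {
            res.writeHead(200, {
                "Content-Type": "text/csv",
                // This header triggers the browser's download dialog.
                "Content-Disposition": 'attachment; filename="export.csv"',
            });
            res.write("id,name\n");
            for await (const row of fetchRows()) {
                res.write(row.join(",") + "\n"); // one row out per row read
            }
            res.end();
        }).listen(8080);

    In a servlet the equivalent is writing each row to response.getWriter() as you iterate the ResultSet, after setting the same two headers. A temp file only wins when the identical export is requested repeatedly and can be cached.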

  • Exporting large amounts of JPEGs to multiple CDs

    I have a problem exporting more than one CD's worth of images - Lightroom spits out my first disc (with nothing on it), then asks for CD 2 (or 3, 4 or more), which burn fine. Then I try to re-burn the first CD, only to find Lightroom has tried to put too many images on it. I delete one, then it burns OK. It's really frustrating - any ideas?

    Export the images to your local hard disk and then burn your CDs from there.
    Beat

  • Error while exporting large data from ReportViewer on an Azure-hosted website

    Hi,
    I have a website hosted on Azure. I use the SSRS ReportViewer control to display my reports, and while doing so I ran into an issue.
    Whenever I export a large amount of data as Excel/PDF/Word/TIFF, it abruptly throws the following error:
    Error: Microsoft.Reporting.WebForms.ReportServerException: The remote server returned an error: (401) Unauthorized. ---> System.Net.WebException: The remote server returned an error: (401) Unauthorized.
    at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
    at Microsoft.Reporting.WebForms.SoapReportExecutionService.ServerUrlRequest(AbortState abortState, String url, Stream outputStream, String& mimeType, String& fileNameExtension)
    --- End of inner exception stack trace ---
    at Microsoft.Reporting.WebForms.SoapReportExecutionService.ServerUrlRequest(AbortState abortState, String url, Stream outputStream, String& mimeType, String& fileNameExtension)
    at Microsoft.Reporting.WebForms.SoapReportExecutionService.Render(AbortState abortState, String reportPath, String executionId, String historyId, String format, XmlNodeList deviceInfo, NameValueCollection urlAccessParameters, Stream reportStream, String& mimeType, String& fileNameExtension)
    at Microsoft.Reporting.WebForms.ServerReport.InternalRender(Boolean isAbortable, String format, String deviceInfo, NameValueCollection urlAccessParameters, Stream reportStream, String& mimeType, String& fileNameExtension)
    at Microsoft.Reporting.WebForms.ServerReport.Render(String format, String deviceInfo, NameValueCollection urlAccessParameters, String& mimeType, String& fileNameExtension)
    at Microsoft.Reporting.WebForms.ServerModeSession.RenderReport(String format, Boolean allowInternalRenderers, String deviceInfo, NameValueCollection additionalParams, Boolean cacheSecondaryStreamsForHtml, String& mimeType, String& fileExtension)
    at Microsoft.Reporting.WebForms.ExportOperation.PerformOperation(NameValueCollection urlQuery, HttpResponse response)
    at Microsoft.Reporting.WebForms.HttpHandler.ProcessRequest(HttpContext context)
    at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
    at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    It works locally (on the developer machine) or with less data, but it doesn't work with large data when published on Azure.
    Any help will be appreciated.
    Thanks.

    Sorry, let me clarify my questions as they were ambiguous:
    For a given set of input, does the request always take the same amount of time to fail? How long does it take?
    When it works (e.g. on the local machine using the same input), how big is the output file that gets downloaded?
    Also, if you can share your site name (directly or indirectly) and the UTC time when you made an attempt, we may be able to get more info on our side.

  • Create new table, one column has a large amount of data - column type?

    Hi, sure this is a noob question but...
    I exported a table from SQL Server and want to replicate it in Oracle.
    It has 3 columns; one of the columns holds the configuration for reports, so each field in that column has a large amount of text in it, some upwards of 30k characters.
    What kind of column do I make this in Oracle? I believe VARCHAR2 has a 4k limit.
    Creation of the table:
    CREATE TABLE "RPT_BACKUP" (
      "NAME" VARCHAR2(1000 BYTE),
      "DATA" CLOB,  -- the column in question: CLOB is the usual choice past VARCHAR2's 4000-byte limit
      "USER" VARCHAR2(1000 BYTE)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER DEFAULT DIRECTORY "XE_FTP"
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE SKIP 1
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        -- NB: with no field list, ORACLE_LOADER assumes CHAR(255) per field; longer values are rejected (see ORA-30653 below)
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('REPORT_BACKUP.csv')
    );

    Ok, tried that, but now when I do a select * from the table I get:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-30653: reject limit reached
    ORA-06512: at "SYS.ORACLE_LOADER", line 52
    29913. 00000 - "error in executing %s callout"
    *Cause:    The execution of the specified callout caused an error.
    *Action:   Examine the error messages and take appropriate action.

  • How can I edit a large amount of data using Acrobat X Pro

    Hello all,
    I need to edit a catalog that contains a large amount of data - mainly the product prices. Currently I can only export the document into an Excel file and then paste the new prices onto the catalog one by one using Acrobat X Pro, which is extremely time-consuming. I am sure there's a better way to make this faster while keeping the data accurate. Thanks a lot in advance if anyone's able to help!

    Hi Chauhan,
    Yes, I am able to edit text/images via the toolbox, but the thing is that the catalog contains more than 20,000 price entries, and all I can do is delete the original price info from the catalog and replace it with the revised data from Excel. Repeating this process over 20,000 times would be a waste of time and manpower... Not sure if I've made my situation clear enough? Please just ask away, I really hope to sort it out. Thanks!

  • Redistributing Large Amounts of Media to Different Project Files

    Hello
    I'm working on a WWI documentary that requires an extremely large amount of media. I was just wondering if (besides exporting hundreds of QT files) I could capture all of this media, spread out, into several FCP project files?
    In the screen shot below, note that there are 2 FCP projects open at one time. Why? Because I thought it might help to redistribute all the captured media into more than just one FCP project file. Is this advisable? Would creating more FCP project files, each holding a share of the media, actually be better than capturing soooo many archival clips into just one main FCP project file?
    Or
    Should I be exporting 'Non-Self Contained' files to be re-imported back into the Project?
    Screen Shot;
    http://www.locationstudio.net/Redistribute-1.jpg
    Thanx
    Mike

    It is absolutely advisable. This keeps project sizes down to manageable levels. This is one of the tips I give in my tutorial DVD on Getting Organized with Final Cut Pro.
    Bear in mind that if your footage and sequences are in different projects, you cannot match back to the clip in the Browser. Once a clip is cut into a sequence that resides in a different project, the link between that media and the original clip in its original project is broken. That can be a pain, so I try not to break up projects unless I have to. And with LARGE projects I find that I have to.
    Although it can still be a pain. When you need to find the footage that you know is in the same bin as that clip... and you don't know which bin that is? Sorry... match back doesn't work. So, just a caveat.
    Shane

  • Is something wrong, or why is Firefox using an extremely large amount of memory?

    Is something wrong, or why is Firefox using such an extremely large amount of memory?
    I'm running Firefox 7.0.1 on Windows 7 and it sits more or less constantly above 1.5 GB of memory utilization, twice as much as the earlier version 6.
    Best regards
    Jonas Walther

    Hi musicfan,
    Sorry you are having problems with Firefox. Maybe you should have asked earlier and we could have fixed it.
    Reading your comments, I do not see that rolling back to an insecure Firefox 22 will actually help you much. You are probably best off using IE, unless you have also damaged that.
    *[[Export bookmarks to Internet Explorer]]
    You should not use old versions; they are insecure. Security fixes are publicised and exploitable.
    * [[Install an older version of Firefox]]
    * https://www.mozilla.org/security/known-vulnerabilities/firefox.html
    Most others will not be having such problems. We are now able to say that with confidence because, after developers missed a regression in Firefox 4, telemetry was introduced so that data could be obtained. It may be an idea to turn on your telemetry, if you have not already done so, and decide to stick with Firefox.
    *[[Send performance data to Mozilla to help improve Firefox]]
    Trying safe mode takes seconds. Unfortunately, if you are not willing to do even rudimentary troubleshooting, there is not anything we can do to help you.
    *[[Troubleshoot Firefox issues using Safe Mode]]

  • Firefox is using large amounts of CPU time and disk access, and I need to know how to shut down most of this so I can actually use the browser.

    Firefox is a very busy piece of software. It uses large amounts of CPU time and disk access, and it puts my usage at low priority, so I have to wait for some time to be able to use my pointer or keyboard. I don't know what it uses all that CPU and disk access time for, but it's of no use to me. It often takes off with massive use of resources when I'm not doing anything, and I may not have use of my pointer for several minutes. How can I shut down most of this so I can use the browser to get my work done? I just want to use the web-site-access part of the software and drop all the extra. I don't want Firefox to recover after a crash. I just want to browse with a minimum of interference from Firefox. I would think that this is the most commonly asked question.

    Firefox consumes a lot of CPU resources
    * https://support.mozilla.com/en-US/kb/Firefox%20consumes%20a%20lot%20of%20CPU%20resources
    High memory usage
    * https://support.mozilla.com/en-US/kb/High%20memory%20usage
    Check and tell if it's working.

  • Creation of data packages due to a large number of datasets leads to problems

    Hi Experts,
    We have built our own generic extractor.
    When data packages are created (due to the large number of datasets), different problems occur.
    For example:
    Datasets are doubled and appear twice, once in package one and a second time in package two. Since those datasets are not identical, information is lost when uploading them to an ODS or Cube.
    What can I do? SAP will not help, due to the generic datasource.
    Any suggestion?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusion from your answers is the following:
    a) Since the ODS is Standard, within the transformation no datasets are deleted, only aggregated.
    b) Uploading a huge number of datasets is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore an automatic split of the datasets into data packages
    c) both ways should have the same result within the ODS
    Ok. Thanks for that.
    So far I have only checked the data within the PSA. In the PSA the number of datasets is not equal for variants b1 and b2.
    Guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use a CachedRowSet on the bean for the page to get the data. This works very well at the moment in my development environment, where I am testing on a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component fetch all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has to do with the logic of paging. How do you specify which set of 20 records to extract in SQL? (See the sketch below.)
    Thanks for your help!!
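
    On the SQL side, the classic way (before OFFSET/FETCH existed) is to wrap the ordered query in ROWNUM filters so the database only ships one page of rows. A hypothetical sketch, shown in TypeScript purely for illustration - in the JSF bean the same statement would sit behind a PreparedStatement with bind variables:

        // Wrap an ordered query so only rows (page-1)*size+1 .. page*size come back.
        function pagedQuery(orderedQuery: string, page: number, size: number): string {
            const hi = page * size;       // last row of the requested page
            const lo = (page - 1) * size; // last row of the previous page
            return `SELECT * FROM (
                      SELECT q.*, ROWNUM rn FROM (${orderedQuery}) q
                      WHERE ROWNUM <= ${hi}
                    ) WHERE rn > ${lo}`;  // use bind variables in real code
        }

        // pagedQuery("SELECT id, name FROM items ORDER BY id", 3, 20)
        // fetches only rows 41-60; the rest never leave the database.

    The initial load is slow when the component runs the unfiltered query and pages in memory; pushing the page window into the query itself is what keeps >75,000 rows cheap.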

  • Opening a large amount of data

    Hi
    I have a file on the application server in .dat format; it contains a large amount of data, maybe 2 million records or more. I need to open the file to check the record count. Is there any software or any option to open the file? I have tried opening it with Notepad, Excel... it gives an error.
    please let me know
    Thanks

    Hi,
    Try this:
    Go to AL11.
    Go to the file's directory; in the listing there is a field called "length", which is the total length of the file in characters.
    If you know the length of a single line, divide the file length by the single-line length; I believe you will get the number of records (see the sketch below).
    Thanks,
    Naren
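
    The same arithmetic as a quick sanity check - a hypothetical sketch assuming fixed-length records, where the line length includes any record separator:

        // Record count for a fixed-length-record file:
        // total file length divided by the length of one line.
        function recordCount(fileLength: number, lineLength: number): number {
            return Math.floor(fileLength / lineLength);
        }

        // e.g. a 200,000,000-character file with 100-character lines:
        // recordCount(200000000, 100) === 2000000 records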

  • Crystal Reports 10 Enterprise Server Issue - Exporting Large Reports to PDF

    We're implementing a Crystal Reports 10 Server for a customer, but we're having an issue with exporting large reports to PDF. The report size that starts to show this issue is roughly 300 pages with 20 records per page. We use the default Crystal ActiveX viewer.
    Basically what happens is that the user will launch the report in the ActiveX viewer and the large report will take a while to load, as much as 3 to 4 minutes. The user may then wish to export to PDF. Once they attempt it with these large reports, the viewer will basically hang, possibly still processing, but will then time out after roughly 5 minutes with this error:
    "Proxy Error
    The proxy server received an invalid response from an upstream server.
    The proxy server could not handle the request POST /thinqreports/THINQCrystalInterface.asp.
    Reason: Error reading from remote server
    IBM_HTTP_Server/6.1.0.9 Apache/2.0.47 (Unix) Server at lmsproxy.aurora.org Port 443"
    We believe this error is coming from a server on our network.  We have a proxy server between users and the Crystal Server that has a 5 minute time-out for connections.  What we want to know is:
    - Is there a way to speed up this export to PDF process?
    - Should the Crystal Viewer be keeping an http connection alive to the Crystal Report Server while its doing this export process?
    - If not, what port does Crystal Viewer use to communicate with the Crystal Report Server? 
    - Is there something we can set in Crystal to keep an http connection open? 
    - What is actually occurring during this export process? 
    Occasionally, we may get the above error when simply launching a report, not just when attempting to export to PDF. We presume that is because loading the report into the Crystal Viewer from the Crystal Report Server is taking longer than 5 minutes.
    Anyone have any thoughts?  Thanks.
    Scott

    Scott,
    As this is happening within Enterprise (and I am assuming the report does not time out in the Crystal Reports Designer), I would suggest posting this to the BusinessObjects Enterprise Administration forum.
    My limited Enterprise knowledge would be to suggest scheduling the report as PDF but I don't know if that is a viable option for you.

  • Bex Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer with about 30,000 pages, and the only thing I am getting is a blank page. Everything works fine at about 3,000 pages. Do I need to set something to allow processing of such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan
