How to learn about converting from Audition 1.0 scripts to "Effects Rack" + Batch?

Rusty with CoolEdit / Audition 1.0. Newbie with the CS6 trial.
I've got a lot of human speech tracks to clean up. Previously, I used the scripting capability of Audition 1.0 with the following "chain of effects":
Convert to mono + 22050 Hz
Normalize to 90%
ClickFix plug-in
Noise reduction
Truncate long silences
Amplitude + Dynamics processing to "Level"
Re-normalize to 90%
Add 2 sec of silence at end
Output as 24 kbps / 22050 Hz .mp3
My understanding is that Audition CS6 stopped supporting "scripts" several versions back. Bummer. My speculation is that the equivalent can be accomplished with the "Effects Rack" + "Batch Process". Or not?
I'd appreciate a link to a tutorial or article that discusses "best practices" to accomplish the equivalent of "Scripts".

Hmmmm ... it may be that my best "path forward" is to continue with the AA1 --> AA3 free upgrade. I really, really like being able to use some fairly elaborate scripts to accomplish a "chain of effects" on hundreds of human speech files.
However, I am really interested in "Adaptive Noise Reduction". Apparently it samples the first 2 to 5+ seconds of the track, and builds an on-the-fly FFT NR profile for that specific track. So far, for me it sometimes works, and sometimes it fails so that the result sounds worse than the noisy original. Bummer. Could be newbie PEBKAC as I wrestle with the learning curve.
Within the CS6 trial, I may attempt a "hold your nose" workaround combining AA3 scripting for 100+ human speech files:
Convert to mono 22050 Hz .wav files
Normalize
ClickFix
Then "hand off" the batch to CS6-trial for
Dehummer
Adaptive NR (if I can get it to work well consistently)
Truncate long silences
Dynamic leveling / compander
Re-normalize (which doesn't seem to be available as an effect)
Add silence (if available)
Output as 24kbps .mp3
Or it may be that CS6 batch/scripting means I'd have to "hand off" .wav results from CS6 back to AA3. Seems pretty lame.
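For context on what a single stage of such a "chain of effects" amounts to outside Audition, here is a minimal Java sketch of the "normalize to 90%" step only. The file names and the assumption of 16-bit little-endian PCM .wav input are mine; this is an illustration of the idea, not Audition's implementation:

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;

public class NormalizeTo90 {
    public static void main(String[] args) throws Exception {
        File in = new File("input.wav");        // placeholder input file
        File out = new File("normalized.wav");  // placeholder output file

        AudioInputStream ais = AudioSystem.getAudioInputStream(in);
        AudioFormat fmt = ais.getFormat();      // assumed: 16-bit signed PCM, little-endian
        byte[] data = ais.readAllBytes();
        ais.close();

        // Find the peak sample magnitude.
        int peak = 1;
        for (int i = 0; i + 1 < data.length; i += 2) {
            int s = (short) ((data[i] & 0xFF) | (data[i + 1] << 8));
            peak = Math.max(peak, Math.abs(s));
        }

        // Scale every sample so the peak lands at 90% of full scale.
        double gain = (0.9 * Short.MAX_VALUE) / peak;
        for (int i = 0; i + 1 < data.length; i += 2) {
            int s = (short) ((data[i] & 0xFF) | (data[i + 1] << 8));
            int scaled = (int) Math.round(s * gain);
            scaled = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
            data[i] = (byte) (scaled & 0xFF);
            data[i + 1] = (byte) ((scaled >> 8) & 0xFF);
        }

        AudioInputStream normalized = new AudioInputStream(
                new ByteArrayInputStream(data), fmt, data.length / fmt.getFrameSize());
        AudioSystem.write(normalized, AudioFileFormat.Type.WAVE, out);
    }
}

A batch "script" in this sense is just that kind of per-file processing run over a folder of files, one effect stage after another.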

Similar Messages

  • Questions about converting from Photoshop to Lightroom

    Hi! I have been a Photoshop user for many years now, but after a lot of thought, reading many reviews and much recommendation, I have decided I would like to buy Lightroom! However, I have absolutely no idea how I go about doing it. I thought it would be simple but after looking into it, people have made it seem much more complicated than I thought. SO, here are a few questions which I hope somebody can answer!
    1. Do I already need a version of Lightroom before I can buy Lightroom 4 or 5?
    2. Do I buy it on a disk, or do I download it off the Adobe website? Which is the better option?
    3. I'm currently on windows 7, however I'm going to be buying an iMac next year. Will I have to buy another version of Lightroom next year? Or can I install it on multiple devices?
    4. Why is Lightroom 5 so much cheaper than Lightroom 4 on Amazon? (not the update versions)

    The Upgrade from a previous version is a lower price, but the upgrade price should clearly say UPGRADE.
    If it says FULL version, you won't need to own previous versions of LR
    Looks like Adobe is offering LR 5 for $40 off on the FULL version right now

  • I'm thinking about converting from a PC to a Mac but....

    Okay, so the laptop I have right now is a Toshiba Satellite PC. I do love it to death but I hate that there are a million and a half viruses for it >.< I'm getting into graphic design and computer gaming more. Right now I have a ton of PC software and I've heard that a Mac can read almost all software. So I'm wondering if some of my PC-only games will run on a Mac as long as it has Windows XP on it? The only reason I'm asking is because I'm about to buy a small bundle, the Adobe Web Bundle. My possible future Mac will be farther along before I buy one. I'm a college student so I want to wait, save, and try to buy it myself. So I guess my question is... would it be worth it for me since I have soooo much PC software?

    well I don't hardcore game or anything but I do play online games (i.e. Guild Wars, World of Warcraft, Black and White). My newest passion has been making graphics... hence why I'm getting the Adobe Web Bundle... I love computers and my boyfriend is also into computers. He was going to college for computers but he's not sure if that's what he truly wants to do with his life. That's why I'm asking about switching over to a Mac. The Mac I want is the really nice black one... mmm... but since I just bought this computer last year it's going to be a very long while until I get my desktop Mac (sadness). So I won't need any other software to install the PC programs? Like something other than XP. I don't really need to worry about losing my discount because my mom is also an educator, so either way I get my discount =D

  • Are there problems converting from Adobe Bridge to Organizer?

    HI,
       I've searched for information on this and haven't found anything so maybe there aren't any problems :} but...Murphy and I are good friends so I have to ask if there are problems.
    I currently run Elements 8 on a MacBook and have about 2000 pictures in Bridge and some in iPhoto. I use Bridge for web design pictures and iPhoto for family stuff.  I 'help out' with the computer stuff at a cat shelter in town and they have just bought Elements 10 and run the PC version.  I am thinking of upgrading to 10 so I will be familiar with their version, but am worried about converting from Bridge to Organizer. Are there any known problems?  Is there a tutorial someplace about the conversion from Bridge to Organizer?
    Thanks for any ideas

    You have no pictures in Bridge. Bridge is just a browser; it just shows the current state of any folder to which you point it, so that's no problem. If you really want to use Organizer, it will pick up your metadata keywords when you import the photos. Photos from iPhoto will be duplicated on import to the Organizer (normally it just makes a database pointer to the existing photo) to avoid inadvertently writing into the iPhoto library, which can corrupt it and cause loss of the photos it contains.
    You will lose any stacks in bridge when you import the photos into organizer.

  • Converting from QuickTime movie to AVI

    I see a lot of questions about converting from .avi to Quicktime but how do I accomplish a conversion from a Quicktime .mov format to an .avi format?
    The reason I ask is because I emailed someone a QuickTime movie and he could not view it and asked for me to send an .avi. I 'think' he's on a PC. Any suggestions? Thanks.

    You can use this freeware to make the conversion:
    MPEG Streamclip
    or you can buy the QT Pro license and use it. With Pro you open the mov file and use Export in the file menu to make the conversion. More details on using Pro are in the QT 7 guide available here:
    http://images.apple.com/quicktime/pdf/QuickTime7UserGuide.pdf

  • What is the SAFEST SEQUENCE to convert from an Outlook/iPad/iPhone synced with MobileMe to syncing with iCloud (I have 10 years of calendar diary events and 3000 contacts) - I am worried about the data issues that have been posted about iCloud.

    What is the SAFEST SEQUENCE to convert from an Outlook/iPad/iPhone synced with MobileMe to syncing with iCloud (I have 10 years of calendar diary events and 3000 contacts) - I am worried about the data issues that have been posted about iCloud.
    This has worked fine with MobileMe with only a couple of minor glitches in the past.
    Any experience doing this the "right" way?

    The warranty entitles you to complimentary phone support for the first 90 days of ownership.

  • I have an iMac running Mavericks 10.9.4. I also have about 30 Kodak photo CDs that I wish to convert from PCD to JPEG, but Mavericks will not see the files; they are greyed out.  Is there any way to convert them?

    I have an iMac running Mavericks 10.9.4. I also have about 30 Kodak photo CDs that I wish to convert from PCD to JPEG, but Mavericks will not see the files; they are greyed out.  Is there any way to convert them or must I do it on another machine?
    Kaibil

    iPhoto used to support the PCD format up to a few years ago. It's extremely frustrating that Apple decided to no longer support this format and now I'm directed to pay nearly $40 for an app that will (GraphicConverter 9). Did Lemke Software make a deal with Apple so we'd have to pay a premium for a very overpriced converter? And, how are decisions like this supposed to encourage the use of iPhoto?
    Apple users already pay a premium to use a Mac. Obstacles like these eventually destroy loyalty to the Apple brand. Just ask Microsoft how their back room deals that exploited their customer base worked out for them.

  • What is the best way to learn about the Galaxy S3 from a novice viewpoint?

    What is the best way to learn about the Galaxy S3 as a complete beginner?  Nothing that is taken as a given by other discussions is known by me.  I instantly get lost.

    Do you already have the phone, or are you planning on getting the phone?
    You could sign up for a free workshop at one of your local Verizon stores. You could read the owner's manual. You could peruse through the topics on THIS Verizon site.
    I thought there was a thread which gave a bunch of tips/tricks for the GSIII, but I can't seem to find it. There IS one for the Galaxy Note II which MAY be helpful with the GSIII, though.

  • About encoding conversion from ISO8859-1 to GBK

    Hi,
    I tried to convert a string of 000"testline"111 from ISO8859-1 to GBK.
    here is the code:
    // db stores data in iso8859 encoding
    String contractname = rs.getString("contractname");
    // change it to gbk
    String contractnamegbk = new String(contractname.getBytes("ISO8859-1"),"GBK");
    my test page uses GBK charset
    and when I display it on the page using <%=contractnamegbk %>
    it only shows me 000, missing the "testline"111.
    Does anyone know what the problem is?
    thanks very much

    > man i still have a problem converting from 8859 to gbk.
    > i think string is just string, it represents that single piece of data
    > wherever it is stored, on windows, unix or other system.
    In Java a String contains UNICODE characters. I say again, UNICODE characters.
    > the difference is that system might use different encoding to read what this
    > data is, so they getBytes(in some encoding) hoping to know what it is.
    No! getBytes() converts the UNICODE characters contained in the String to an array of bytes using the encoding specified. If you don't specify an encoding it takes the default encoding, which is platform and Locale dependent.
    > i use this
    > String gbkstring = new String(paraStr.getBytes("ISO8859-1"),"GBK");
    > i think data transferred from a iso8859 database to my testpage, and i should
    > use the same way to decode it to what it is and i then use GBK to store or display it.
    NO! NO! NO! As I said in my first post, this first creates an array of bytes which are the ISO-8859-1 representation of the characters in the String and then pretends that these are GBK encoded and tries to create a String from them. It does not convert ISO-8859-1 to GBK. I say again, it does not convert ISO-8859-1 to GBK.
    > and it works. but when i tried to convert 000"test word"111 it still displays
    > 000 missing the rest.
    I don't understand what you mean here.
    The bottom line is that when you get the String from the database, whatever the coding was in the database, once you have it as a Java String the content is UNICODE characters. If the database stored it as ISO-8859-1 then hopefully the JDBC driver performed the correct conversion to turn it into UNICODE.
    Java Strings contain UNICODE characters. Java Strings do not contain ISO-8859-1 characters. Java Strings do not contain GBK characters. Java Strings contain UNICODE characters.
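    A short, self-contained sketch of that point (the string literal below is an illustrative example, not the original poster's data, and it assumes a JDK where the GBK charset is installed):

    import java.nio.charset.Charset;
    import java.util.Arrays;

    public class CharsetDemo {
        // A Java String holds Unicode characters; getBytes(...) only produces a
        // byte representation of those characters in the charset you name.
        public static void main(String[] args) {
            String s = "000\u6D4B\u8BD5111"; // "000" + two CJK characters + "111", as Unicode text

            byte[] gbkBytes = s.getBytes(Charset.forName("GBK"));          // GBK encoding of the text
            byte[] latinBytes = s.getBytes(Charset.forName("ISO-8859-1")); // lossy: '?' for chars outside Latin-1

            System.out.println("GBK bytes:        " + Arrays.toString(gbkBytes));
            System.out.println("ISO-8859-1 bytes: " + Arrays.toString(latinBytes));

            // Round-tripping through the *same* charset recovers the text...
            System.out.println(new String(gbkBytes, Charset.forName("GBK")));
            // ...but re-labelling ISO-8859-1 bytes as GBK does not "convert" anything;
            // characters outside Latin-1 were already lost at getBytes() time.
            System.out.println(new String(latinBytes, Charset.forName("GBK")));
        }
    }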

  • Convert from PSE10 to Lightroom - metadata concerns

    After one too many frustrations with PSE, I've been looking for an alternative program.  I don't care about editing - metadata (Organizer type features) are my focus.  I looked at a few non-Adobe programs - a big problem with them is I can't import my PSE10 catalog, with 12,000 photos.  On the PSE forum, the most common advice was to go to Lightroom, so I was holding out hope for that.  However, I've played with the LR 5 trial for half a day now, and as far as I can tell, it doesn't make good use of the PSE catalog either!  This is very disappointing for me, but I'm wondering if some of you folks can show me things I'm missing.
    I'm looking into converting from PSE10 to LR 5 on a Windows 7 machine.  I have about 12,000 photos, half TIFs from scanned images and half JPGs from digital cameras.  As I said, I only care about metadata management.  My focus is genealogy, so getting the original dates of scanned pictures captured is very important, and these often are only roughly known.
    My main issue is that it seems to me that LR is taking the metadata from the photo files, not the catalog.  I used the LR feature "Upgrade Photoshop Elements Catalog".  I say this for two reasons:
    (1) When PSE10 writes tags to files, it adds to any tags that are already there.  So although PSE10 shows only the updated set of tags in the GUI, the photo files have both old and new tags.  After doing the LR Upgrade PSE Catalog, I'm seeing old and new tags.  It appears to be ignoring what is in the catalog and looking at the file.  (As far as I know, the old tags are not present in the PSE catalog anymore, so it can't be looking at the converted catalog.)
    (2) PSE10 allows you to tag photos with incomplete dates (e.g. no time, no day and/or no month).  But if PSE10 has an incomplete date, it won't write it to the file (very annoying for working with historical photos).  But LR does not show these partial dates; it only shows complete dates.  In this case, I'm not sure if LR is showing the metadata in the photo files, or if it dropped the PSE10 catalog date info during the "Upgrade".
    So my main question is, am I missing something with regards to the import, or am I correct in my impression that this "Upgrade PSE Catalog" isn't doing much of anything?  (ExifTool shows all the metadata in the photo files -- much more complete than LR for that matter -- so what am I gaining by importing the PSE catalog into LR?)
    A couple other miscellaneous questions if I may:
    -- In PSE10, there is a notes field in the metadata, which goes into XMP-album in the photo files.  Does LR not support this?  As far as I can tell, LR supports only a few XMP namespaces, with no support for others.
    -- In LR in the Folder pane, I have no scroll bar, so I can only see the first few entries.  This seems to be a serious bug to me.  Or am I missing something?
    -- It appears in LR that one cannot show your entire catalog in the main pane sorted by folder then by file name.  Is this correct?  (This also seems to be a major drawback to me!)
    -- When I try to change the date of a photo (Edit Capture Time in LR), it forces you to enter a complete date/time, unlike PSE10.  Overall, it is much easier to manage these dates in PSE, it seems.  Does LR handle incomplete dates to any level?
    -- When I did my PSE import/"Upgrade", it assigned the wrong file names to some of my photos (it used file names from other photos in the catalog).  Is this a known issue?
    I imagine I have a very unusual vantage point, but at this point it seems that PSE is far superior to LR!  And I'm not happy with PSE10!!  Anyways, thanks in advance for your input.  Sorry about the long post - wasn't sure if I should divide it up or not.
    Bill

    This is a reply to the post from John R. Ellis...
    You mentioned "Adobe has never bothered to fix...".  Note that the last post by Michel B in this thread
    http://www.elementsvillage.com/forums/showthread.php?t=78779&page=2
    indicates that PSE11 can handle this.
    You say "After importing.... I used ExifTool to append..."  Did you do this file by file, or with a script?  A Perl script?  (The reason I ask is that I would have to learn Perl, so I would have to decide how much this means to me...)
    Regarding your suggestion to use fake dates
    "(e.g. "1970" => "1/1/1970 00:00:00")"
    I've been aware of this since I read it in your psedbtool material a year or two ago, but I've resisted making such changes.  When dealing with historical photos, you want the ambiguity.  In fact, you'd like to use things like "About 1970" and such.  I wish the photo metadata community would get on board with that.
    Regarding your point about catalog conversion, for me it sounds like it would be just as good to import the photo files directly.  (Better for me, I think, since converting the catalog seems to lose the folder hierarchy in the LR Folder view for some reason.)  However, if I'm going to import the photo files directly, then LR has no advantage for me over any other program that I can see.  The ExifTool GUI utility does a much better job with the metadata in general - the key drawback for me being the fact that you have to type out the keywords each time you want to add one to a photo.  Looking at the ACDSee Pro trial, that appears to handle metadata better than any of the Adobe products - the critical drawback being that it uses its own proprietary XMP namespaces, meaning interoperability issues.  LR has drawbacks for me, such as not being able to sort by folder then filename in the main view pane and the lack of support for other XMP namespaces (and the Folder pane scrollbar thing).  So at this point I'm in a state of depression.  As far as I can tell, I only have a partial solution to my problems - upgrade from PSE10 to PSE11.  Truly a depressing notion!!
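    (For what it's worth, such a script need not be Perl. A minimal, hypothetical Java sketch of driving ExifTool over a folder might look like the following; the folder path, the keyword, and the assumption that exiftool is on the PATH are placeholders, not anything from the posts above.)

    import java.io.File;
    import java.io.IOException;

    public class AppendKeyword {
        // Hypothetical sketch: call the exiftool command line from Java,
        // appending one keyword to every JPG in a folder.
        public static void main(String[] args) throws IOException, InterruptedException {
            File dir = new File("C:/photos/scans");   // placeholder folder
            String keyword = "genealogy";             // placeholder keyword

            File[] files = dir.listFiles((d, name) -> name.toLowerCase().endsWith(".jpg"));
            if (files == null) return;

            for (File f : files) {
                Process p = new ProcessBuilder(
                        "exiftool",
                        "-overwrite_original",
                        "-keywords+=" + keyword,
                        f.getAbsolutePath())
                        .inheritIO()
                        .start();
                int exit = p.waitFor();
                System.out.println(f.getName() + " -> exiftool exit code " + exit);
            }
        }
    }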

  • Send me link to learn about LSMW - Direct input

    Hi,
    Please send me a link to learn about LSMW - Direct Input.

    Hi,
    See below links
    http://www.sap-img.com/sap-data-migration.htm
    http://www.sapgenie.com/saptech/lsmw.htm
    http://sapabap.iespana.es/sapabap/manuales/pdf/lsmw.pdf
    http://66.102.7.104/search?q=cache:-DE9K5Tj0j8J:www.scmexpertonline.com/downloads/SCM_LSMW_StepsOnWeb.doclsmwdoc&hl=en
    Steps to create:
    1. Create Project, Subproject, Object (freely definable, for the purpose of identification) - use the Create icon in the menu. For example:
       Project: FI (Finance)
       Subproject: GL (GL Master)
       Object: GL (GL Master)
    2. Continue (F8). You will now see a series of steps. Select a step and execute it (Ctrl+F8). In each step we do some activity and save it; the system then prompts for the next activity.
    3. Maintain object attributes. By default the screen is in display mode; click on the pencil button to get into change mode (this applies up to the 'Assign files' activity). In this screen select 'Batch input recording' and in 'Recording' give a name (freely definable), say 'GL01', and click on the Recordings overview button. Now click on the create button for the recording. Give a name and description; the system will ask for the transaction code. Enter the transaction code and enter the data the way you want to fill in the SAP screens/fields with legacy data, then save the transaction. Save again to save the recording. After that select the 'Default all' button, save, and come back; the cursor is now on the next activity.
    4. Maintain source structures. Here create the structure by specifying a name (freely definable).
    5. Maintain source fields. In this screen, click on 'Copy fields' (Ctrl+F8) and select 'From data file (field names in 1st line)'. The system asks for the file; select your text file and it will copy the fields from the text file (a tab-delimited file with the field names in the first row). Save it.
    6. Maintain structure relations. Generally, we need not do anything here.
    7. Maintain field mapping and conversion rules. Put the cursor on a field and click on 'Source field'; you can then see the list of available fields, from which you select the one that corresponds to the field on which you put the cursor. You can also write ABAP code on any field for the purpose of data validation. Save it.
    8. Maintain fixed values, translations, user-defined routines. Here you can also maintain values, if required.
    9. Specify files. In this screen, place the cursor on 'Legacy data' and click on the create button. The system asks for the file name (the text file in which the legacy data is); enter the file name and description, and select 'Data for one source structure (table)', delimiter 'Tabulator', and file structure 'Field names at the beginning of the file'.
    10. Assign files. The file name you specified in the earlier step comes up automatically here; if it does not, click on the 'Assignment' button and assign the file.
    11. Read data. Specify the transaction number (a number not less than the number of rows in the text file). Execute it.
    12. Display read data. Here you can see how the system read the values.
    13. Convert data. Specify the transaction number (a number not less than the number of rows in the text file). Execute it.
    14. Display converted data. Here you can see how the system converted the data after applying the conversion rules specified in the earlier steps. This is the data that actually goes into SAP once the batch input session is run.
    15. Create batch input session. Specify the batch input session name for identification and also select 'Keep session'.
    16. Run batch input session. Select the batch input session and click 'Process', then select 'foreground' or 'background'. The program runs and updates the data into SAP. You can analyze the batch input for successful postings and errors.
    Regards,
    Omkar.

  • Bug? Accessibility Tags Converting from Word 2007

    This seems like a minor issue, but it's one that could create a lot of frustration for a disabled person using a screenreader to read tabular data in a PDF.
    As you know, Acrobat plays nicely with Office apps allowing users to create tagged (structured), accessible PDF documents from MS Office files. I just created a simple docx file with a table (attached), and when I converted it to PDF, I noticed a difference in the tags it creates compared to conversion from Word XP. As you see in the Word file, the table is very basic, except that one of its column headers is split into two cells. This is actually a very common technique for presenting table data. In order to automatically tag the header rows as table header cells <TH> in the PDF, I set the first two rows to "Repeat Header Rows."
    Converting from Word 2007 with the "Save as Adobe PDF," or any other method that uses the Acrobat plugin, creates a tag tree that is missing a <TH> tag. I found the problem when I was testing a file with JAWS screen-reading software, using the JAWS "current cell" command (Ctrl+Alt+Numpad 5) to announce the column headers. It reads the wrong header for the current cell due to the missing <TH>. So, in my example file, it announces $2 and $5 as 2010 amounts rather than 2009. That could be pretty confusing to a screen reading user, to say the least.
    I then compared the result to the new Word 2007 "Save as PDF or XPS" feature. That feature tagged the file properly and the header columns match up.
    Compare the attached "save-as-adobe-pdf.gif" to "save-as-pdf-xps.gif". Note the empty (but necessary) <TH> tag in the latter image.
    Just as a sanity check I had a coworker with Word XP convert the file. Those tags were correct too. So, this must be a problem between Acrobat and Word 2007.
    Anyone have other observations on this? I'm going to be leading some accessibility training and right now, it looks like using the Word 2007 conversion feature is the way to go.
    I'm using Acrobat 9 Pro.
    Thanks,
    Joe

    Hi Joe,
    I sense your frustration. For any organization that has to or wants to engage in providing accessible online information
    a serious logistics support issue raises its head. To do PDF, HTML, whatever the proper way (and it can be done)
    requires more resources (training, knowledge, hardware, software, changes to work flows, perhaps some more staff).
    There is no "work smarter with less & pump out more" in this venue.
    Yes, it is helpful (and necessary) to "be one" with the S508 "paragraphs" - WCAG 1.0 - WCAG 2.0.
    However, once anyone begins to provide PDFs that must be "accessible" the first, single most important reference is ISO 32000.
    The Adobe PDF References that preceded PDF becoming an ISO Standard are useful; but, ISO 32000 is the standard.
    In this documentation there is full discussion of what *must* be done to provide an accessible PDF.
    Without a firm understanding of this content, other information tends to bring about a diffused opacity of focus which can
    contribute to major conceptual errors vis-a-vis accessible PDF.
    Leonard Rosenthol's AUC blog entry provides a link to the ISO permitted Adobe version (free) of ISO 32000-1.
    http://www.acrobatusers.com/blogs/leonardr/adobe-posts-free-iso-32000
    Additional, useful information is found in these two documents:
    (1) - PDF Accessibility API Reference (from the Acrobat SDK)
    https://acrobat.com/#d=J7bWW4LvNoznh8fHCsfZDg
    (2) - Reading PDF Files Through MSAA
    https://acrobat.com/#d=uIOvkVTP74kag3bXFJLIeg
    About JAWS - Yes, much used. However, not the exclusively used AT application.
    If I use Windows Eyes, NVDA, a braille reader, or something else then what?
    JAWS *does not* define "it is accessible"...
    re: (1)
    "Game away and if it ...."
       Consider "Stop before right on red".
       "Compliance" is Stop on Red - Turn Right
       "Intent" (aka usability) is Stop on Red -  Look Good for on coming traffic that has the right of way - Yield - when clear, turn right.
    But, at least we are not talking about "left on red" 8^)
    re: (2)
    Just an observation. A defective product that claims to be "whole" can get entities (individuals/businesses) into a sticky wicket.
    Putting a high volume of defective products on one's shelves only increases the probability that one gets 'busted'.
    Quantity replacing Quality just is not a success precursor.
    Case in point - Target and the national class action legal action that was taken against it with regards to "accessibility" of online information/services.
    Resolved now - see NFB's web site.
    re: (3)
    Ah, but what would Judge Judy or Judge Marilyn say?
    Efficiency does not preclude providing a "whole" product.
    I doubt that there will ever be a seamless "one-click" between products of any of the dominant software houses.
    They are intense competitors. That this is the case does not abrogate others from providing a "whole" product, no?
    So, if the organization wants the "we do accessible PDF" label then it pays the freight - Adobe Pro, training, appropriate work flows, etc
    that permit delivery of PDFs that meet the standards for what a well formed tagged output PDF is (accessible is a sub-set of this).
    For PDF there is no other way.
    If this cannot be done then there is always HTML as an acceptable method (to some it is the preferred and only "true" way).
    However, HTML, done "right" for accessibility, is just as demanding in its own way.
    With each AT version / dot version release, JAWS - Windows Eyes - NVDA & others hone in closer on utilizing PDF ISO Standard 32000.
    That means if you deploy "accessible" PDF you need to provide PDF that live to the ISO standard.
    Keep in mind that S508's paragraphs began when, effectively, HTML was "it". In software terms that was geologic eons ago.
    For contemporary AT to effectively parse PDF, the PDF must be a well formed Tagged PDF having a format/layout that reflects a logical hierarchy.
    Creation of all this must start in the authoring environment with the content author.
    The post-process PDF output then assures that the PDF elements (tags) are the correct type, have the requisite attributes, etc.
    Without this, AT will not be able to provide the end user effective utilization of the PDF.
    So, for AT to properly 'work' the PDF, <TH> elements *must* have the Scope attribute's value defined, Row and Column Span values defined, etc.
    Scope, Row Span, Column Span, Table IDs and Headings must be added as part of the post-processing of a PDF using Acrobat Professional.
    An alternative is the Netcentric CommonLook plug-in for Acrobat Professional. What it does, Acrobat Pro can do; however, the CommonLook
    provides a robust user interface. Downside: at some $1k per seat it is not 'cheap' and it has a *steep* learning curve (Sitka Pass?).
    Two table related resources are at this AUC thread (in post 3 and 4). They may be of some usefulness.
    http://www.acrobatusers.com/forums/aucbb/viewtopic.php?id=23178
    When the "smelly stuff" gets feed into the maw of the fan it's prudent to not be directly down stream, eh.
    Consider Target and the situation they put themselves in.
    Consider submittal of accessible PDF to fedgov or stategov agencies.
    They won't be in front of the fan if usability of the PDFs becomes an issue.
    Rather, it will be those submitting. After all the agency did say "accessible".
    Better to slow down and do it right or ramp up resource loading to support "schedule" than to stake oneself out as someone's "feed" tomorrow, no?
    In the final analysis, for PDF, HTML, or any 'format',  Accessibility is the Usability + Compliance.
    Does it take improvements in professional development/training, adequate hardware/software, *time*?
    Yes. But, it all comes down to "where the rubber hits the road" - what tires are you on?
    It can be done. I do it one small step at a time every day. Often, that's what it takes.
    Deliverables are provided, but with no mis-labeling; the incremental progress is identified and celebrated, and the whole thing continues until the "road" is completed properly.
    Don't want wash outs, bridge collapse or what not tomorrow <g>.
    (But then I'm a fan of "Holmes on Homes" which may go a long way towards understanding my point of view when it comes to accessible PDF.)
    re: function(){Return ....
    Good question.
    My guess - either from the cut & paste I initially performed from the application I'd been using to assemble the write-up and screenshots, or something associated with the Adobe Forum application.
    It can't be that I'm 'special'; if that were the case, one of my occasional lotto quick picks would have been a big $ winner long ago <G>.
    fwiw -
    You'll find a number of "Accessible PDF" related resources in the threads at the AUC Accessibility Forum.
    http://www.acrobatusers.com/forums/aucbb/viewforum.php?id=18
    Two Accessible PDF related on demand eSeminars are also available.
    Look for Duff Johnson's and Charlie Pike's (on page 2) eSeminars.
    http://www.acrobatusers.com/learning_center/eseminars_on_demand
    Be well...

  • Converted from Final Cut 7 to Premiere Pro CC

    Hello totally awesome community of hard working people who make cool things!
    I am converting from Final Cut 7 to Premiere Pro CC. I've been in for 1 day and am impressed that many of the shortcuts are the same.
    I have much to learn and am studying every day.
    Do you have any recommendations for any great free video tutorial series on this software?
    Thanks!
    Check out the film I made with Final Cut 7. I'm excited about Premiere Pro CC because it will save me a lot of time!!
    Beyond the Music: Soundscapes, El Sistema and the Proven Power of Music | Video Portal (El Sistema USA)

    Hi soundscapes,
    Here you go: http://tv.adobe.com/product/premiere-pro/
    There are more on paid websites.
    Thanks,
    Rameez

  • I have a new iPhone 5S.  While trying to learn about it, I accidentally recorded a voice memo with no content.  I cannot now figure out how to get rid of it.  There is a banner across the top of my phone with this memo which I don't want.  Help!

    I have a new iPhone 5S.  While trying to learn about it, I accidentally recorded a voice memo with no content.  I cannot now figure out how to get rid of it.  There is a banner across the top of my phone with this memo which I don't want.  I have deleted it from iTunes but cannot get it off the phone.  Help!

    The banner usually indicates that the memo is "Paused." If you go back into voice memos, touch the word "Done" beside the big red pause button, give it a name, then it will show in a list. Touch the memo in the list then touch the trash can icon that should appear.

  • Network storage drive...where can I learn about these?

    I have so many questions, and frankly, the answers I've seen looking (briefly, admittedly) around these threads scare me.  Gee, Wally, what ever happened to Apple as the computer you didn't have to know computer programming to use?
    Now, before you all get your panties in a bunch telling me that I'm responsible for knowing how to use my hardware, I know...and I do want to know.
    So what I want to know now is how can I learn about the various types of "shares", and what they mean for hooking up a network drive for backup, etc.
    I have a WD 1TB drive which I believe uses "SMB", whatever that is.  It's got two folders on it currently: "Public", which I can access, and "WD Backup", which I can't.  I also can't seem to use it as a Time Machine drive, which is what I want to do with it.  And, of course, I can't reformat it, which I'd love to do as my wife hates the folder designations on it, and would like to have one folder called "iMac Backup" on it.
    Answers which don't involve Terminal will get extra points, but I've dealt with Terminal before, so take your best shot.  And ask for clarification if you need it.  Thanks.

    Unfortunately, networking and file-sharing aren't quite so simple in part because there's decades of history and dozens of companies involved in getting it into the current state. If you want dead-simple, you can get an Apple Time Capsule or AirPort with an external drive and be done with it. However, if you really want the most from network storage, you'll probably want to read up...
    WD makes a number of drives, some network-attached, some not. We'll assume that you have one of their basic consumer network-attached drives.
    "SMB" stands for "Server Message Block" protocol and it was the local area network file-sharing protocol developed for Windows for Workgroups circa 1992 (it was later renamed CIFS - Common Internet File System - by Microsoft in 1996 because of the rising popularity of the Internet, despite the difficulty in getting it to work in anything but a small local network). SMB/CIFS is still widely used today for Microsoft Windows networks, though much of the equipment that serves those files are built on Linux or FreeBSD and not Windows.
    Access to shares on an SMB server (such as your hard disk) is controlled through a configuration utility provided with the drive or via a web application built into the device (that you access through your browser). Consult your manual. Generally speaking, however, SMB servers offer different ways of identifying whether or not you can access a share: 1) based on the computer connecting (in which case, everyone is considered the same user), or 2) based on a username/password combo. Generally, a server will only operate in one mode. Shared directories can also allow or deny "guest" access (access without a password). All of this should be configurable through the drive utility or the web application which you access with your web browser.
    It sounds like the Public directory is configured to allow guest access, and I'm guessing that "WD Backup" is a share that's password-protected and intended to be used with Western-Digital's own software of the same name. You should be able to add and create new directories as necessary through the provided utility or the web interface.
    You are correct, you cannot use Time Machine with an SMB file-sharing service. Time Machine will require AFP (Apple Filing Protocol) or NFS (Network File System) support. Western-Digital only has 3 network drives that support AFP and Time Machine: My Book Live, MyBook World, and WD ShareSpace. OS X also has some basic requirements about the performance and capabilities of network storage in order to use it with Time Machine, so you really want to look for drives that state up front that you can use Time Machine with them (for example, WD World Edition: http://www.wdc.com/wdproducts/updates/docs/en/appletimemachine.pdf )
    I'm not sure why you can't reformat the drive. This is supported on all WD products (though, if you've moved a bunch of data to it already, perhaps it would be a hassle).
    You can do backups of your Mac over SMB, but it's complicated by the fact that SMB is quite old and isn't capable of storing information about the file permissions, ownership, etc. It will be reliable for data files, but not for applications, etc. There's a work-around, of course. You can create a disk image on the SMB share using Disk Utility (make sure you create an HFS+ image) and backup to that. If you go that route, I would suggest either Carbon Copy Cloner or SuperDuper to perform your backups. They are true backup tools, not versioning tools like Time Machine.
