Being more efficient with effects

Hi,
I typically have 3 layers of video. The first is a news broadcaster on the left of the screen; in the background is a static backdrop, and on the right are images, maps, etc., whatever the broadcaster is talking about. Now, the images/maps video layer always has the same effects (basic 3D, bevel edge, size and position). At the moment I am applying these effects to every image/map. Is there any way to be more efficient? For instance, could I assign these effects to a layer of video, so that whichever image I put in this layer the effects are automatically added?
Thanks
Gareth

Set up your first insert, then select that clip with the effects applied and right-click.
Select 'Copy', then select all of the other clips to which you want to apply
the same basic 3D, bevel edge, size and position effect, right-click
and select 'Paste Attributes'.

Similar Messages

  • How to get more efficient After Effects and Mocha Integration?

    Hello All
    Is there a way to open a previously created Mocha AE project from AE? I am working on a shot that needed tracking, and each time I stopped and then came back to the project, I had to open AE, open the AE project, click the clip in the composition, click Animation, click Track in Mocha AE, cancel Mocha asking to start a new project, and then click File, Recent Files and choose the appropriate Mocha project.
    This works but is really dumb.
    I also understand this is the bundled version and that Imagineer really wants me to buy the full version. If I get enough projects to justify it, I will. Currently the bundled version is fine. Just need a shot that AE couldn't handle stabilized.
    What would be even better is if AE saw the Mocha program as an effect. Click a clip and then apply Mocha AE Tracking from the effects and presets panel. And then be able to do the track in the Mocha interface and then using the effects controls tell it to apply the track as either stabilization, corner pin data or whatever from the AE interface when the clip is selected.
    Does anyone else think this would be a much better way of using Mocha? Am I the only one who feels this way? Was there something I missed in creating my project?

    Thanks for the reply but... I think maybe I might not have communicated my thoughts correctly.
    I am working on a video that is going to be 1/2 hour long. Part of my footage is from a handheld camera. One section of the footage needs to be stabilized. So I right click on the clip in Premiere and replace with an After Effects Composition.
    Now I tried to track just using AE but that was a wash. So I figured I would track it in Mocha. Then I spent at least 1/2 an hour looking for Mocha in my start menu and on my hard drives in places that programs usually reside. Then when I didn't find it I went to the Creative Cloud site (I am a subscriber) and checked to see why I was having a hard time finding it. Then I got a little panicked trying to find out if it was still bundled with CS6 as I couldn't find mention of it there. The Imagineer site said it was, but it was not in my start menu and I couldn't manually find the executable where I expected it to be.
    So then, after a short Google search, I found that it was on the Animate menu in AE. (Whew!)
    So I opened the AE project I had created for tracking the clip and clicked animate and saw that the track in Mocha AE was grayed out. (oh boy...) I immediately went to the edit menu and clicked preferences, general. I looked through the choices and tried to find where I was supposed to "turn on" Mocha. Couldn't find anything related.
    So I selected the clip in the composition timeline. Chose Animate and then the choice was available and I clicked Track in Mocha AE.
    Now Mocha asked to create a new project. Basically took the defaults and then I had to wait a while as Mocha cached the clip. Once the clip was cached I got started finding the relevant area of the footage and set up the x spline for tracking on the first frame. Then I saved so I could start again the next day without having to wait and went to bed happy.
    The next day, when I was ready to start tracking that footage again, I tried to find Mocha in my start menu, figuring that maybe it would show up now because I had used it once. It didn't, so I found it in Program Files, tucked away in a secret folder in the Adobe folder sets, created a shortcut and double-clicked it. It told me that I had to start it from inside AE.
    So I opened the previous day's AE project and clicked the clip, chose Animation and clicked Track in Mocha. Mocha wanted to start a new project based on the clip I had to select in order to get the menu choice from inside of AE, so I canceled that, and then Mocha opened and I chose recent files and the previous day's project.
    And then I waited again as Mocha re-cached the clip. I made sure everything was set up correctly, looked at the spline layer and felt I was good to go so I hit the forward track button. The clip has a frame length of 21752 frames. At frame 9199 I had to stop working and do other things. I stopped the tracking and saved the project.
    The next day, when I was ready to start tracking that footage again, I opened the previous day's AE project and clicked the clip, chose Animation and clicked Track in Mocha. Mocha wanted to start a new project based on the clip I had to select in order to get the menu choice from inside of AE, so I canceled that, and then Mocha opened and I chose recent files and the previous day's project.
    And then I waited again as Mocha re-cached the clip. I made sure everything was the same as I had left it the previous day, found the end of the previous day's work and moved Mocha's CTI to the spot where I needed to track from. I hit the track forward button. It told me the footage was different or too dark compared to previous frames and that it couldn't continue. HUH?
    So I noticed that there was a different coloration on the frame timeline indicator. There was a blue section, where I had previously tracked and a red section, which I assumed meant it was untracked. I moved the CTI back into the blue section and hit track forward. Everything worked. Then I got to a section that had people obscuring the part of the shot that I was doing my tracking on. Time to create an exclusion x spline layer. I found the first frame where the people would become a problem and surrounded them. Everything was good even though it was taking 7 extra steps to get to the start of working on the clip each time. So I saved and went to bed.
    Should I bother to tell you what happened the next day?
    Today, when I was ready to start tracking that footage again, I opened the previous day's AE project and clicked the clip, chose Animation and clicked Track in Mocha. Mocha wanted to start a new project based on the clip I had to select in order to get the menu choice from inside of AE, so I canceled that, and then Mocha opened and I chose recent files and the previous day's project.
    And then I waited again as Mocha re-cached the clip.
    This workflow is aggravating. I have to take extra steps, wait at least 5 minutes for caching each time and then wait for the actual track itself. I have used Mocha before, so I know how to use it and I know the different ways of getting the data to AE, but the problem is not the end of the workflow; it is getting to the start of the work each time I have to close the project(s).
    This "workflow" is making me want to see if there is a better way or another program that will help in correcting shaky footage. Yes, I get back to exactly where I left off, but the point is the amount of work it takes to just start working on my project again. I wonder how anyone can tolerate the process.
    Is there some magic menu choice or setting somewhere I am missing? Do you feel that this process is "normal"?

  • Creating a time channel in the data portal and filling it with data - Is there a more efficient way than this?

    I currently have a requirement to create a time channel in the data portal and subsequently fill it with data. I've shown below how I am currently doing it:
    Time_Ch = ChnAlloc("Time channel", 271214, 1, , "Time", 1, 1)   'Allocate time channel
    For intLoop = 1 To 271214
      ChD(intLoop, Time_Ch(0)) = CurrDateTimeReal   'Create time value
    Next
    I understand that the function to create and allocate memory for the time channel is extremely quick. However the time to store data in the channel afterwards is going to be highly dependent on the length I have assigned to the Time_Ch. In my application the length of Time_Ch is variable but could easily be in the order of 271214 or higher. Under such circumstances the time taken to fill Time_Ch is quite considerable. I am wondering whether this is the most appropriate way of doing things or whether there is a more efficient way of creating a time channel and filling it.
    Thanks very much for any help.
    Regards
    Matthew

    Hi Matthew,
    You are correct that there is a more efficient way to do this.  I'm a little confused about your "CurrDateTimeReal" assignment-- is this a constant?  Most people want a Time channel that counts up linearly in seconds or fractions of a second over the duration of the measurement.  But that looks like you would assign the same time value to all the rows of the new Time channel.
    If you want to create a "normal" Time channel that increases at a constant rate, you can use the ChnGenTime() function:
    ReturnValue = ChnGenTime(TimeChannel, GenTimeUnit, GenTimeXBeg, GenTimeXEnd, GenTimeStep, GenTimeMode, GenTimeNo)
    If you really do want a Time channel filled with all the same values, you can use the ChnLinGen() function and simply set the GenXBegin and GenXEnd parameters to be the same value:
    ReturnValue = ChnLinGen(TimeChannel, GenXBegin, GenXEnd, XNo, [GenXUnitPreset])
    In both cases you can use the Time channel you've already created (which, as you say, executes quickly) and point the output of these functions to that Time channel by using the Group/Channel syntax of the Time channel you created for the first TimeChannel parameter in either of the above functions.
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments
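
    The advice above replaces a per-row loop with one bulk generation call. DIAdem scripts are VBScript, but the idea is easy to sketch in Python; the function name and parameters below are hypothetical, loosely mirroring ChnGenTime's begin/end/step arguments:

    ```python
    def gen_time_channel(x_begin, x_end, step):
        """Generate an evenly spaced time channel in one call, the idea
        behind DIAdem's ChnGenTime(), instead of assigning row by row."""
        n = int(round((x_end - x_begin) / step)) + 1
        return [x_begin + i * step for i in range(n)]

    # A 10-second channel sampled every 0.5 s: 21 values, with no explicit
    # per-row loop in the calling script.
    ch = gen_time_channel(0.0, 10.0, 0.5)
    print(len(ch), ch[0], ch[-1])
    ```

    The point is that the cost of filling 271214 rows moves out of the interpreted script loop and into one library call.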

  • I need a more efficient method of transferring data from RT in a FP2010 to the host.

    I am currently using LV6.1.
    My host program is currently using DataSocket to read and write data to and from a FieldPoint 2010 system. My controls and indicators are defined as datasockets. In FP I have an RT loop talking to a communication loop using RT-FIFOs. The communication loop uses Publish to send and receive via the DataSocket indicators and controls in the host program. I am running out of bandwidth in getting data to and from the host, and there is not very much data. The RT program includes 2 PIDs and 2 filters. There are 10 floats going to the host and 10 floats coming back from the host. The desired Time Critical Loop time is 20 ms. The actual loop time is about 14 ms. Data is moving back and forth between the host and FP several times a second without regularity (not a problem). If I add a couple more floats in each direction, the communication drops to once every several seconds (too slow).
    Is there a more efficient method of transferring data back and forth between the Host and the FP system?
    Will LV8 provide faster communications between the host and the FP system? I may have the option of moving up.
    Thanks,
    Chris

    Chris, 
    Sounds like you might be maxing out the CPU on the Fieldpoint.
    DataSocket is considered a pretty slow method of moving data between hosts and targets, as it has quite a bit of overhead associated with it.  There are several things you could do. One, instead of using a datasocket for each float you want to transfer (which I assume you are doing), try using an array of floats and one datasocket transfer for the whole array.  This is often quite a bit faster than calling a Publish VI for many different variables.
    Also, as Xu mentioned, using a raw TCP connection would be the fastest way to move data.  I would recommend taking a look at the TCP examples that ship with LabVIEW to see how to effectively use these. 
    LabVIEW 8 introduced the shared variable, which, when network-enabled, makes data transfer very simple and is quite a bit faster than a comparable datasocket transfer.  While faster than datasocket, shared variables are still slower than flat out using a raw TCP connection, but they are much more flexible.  Also, shared variables can function in the RT FIFO capacity and clean up your diagram quite a bit (while maintaining the RT FIFO functionality).
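
    The array-in-one-transfer suggestion can be illustrated outside LabVIEW: packing all ten floats into a single binary payload means one publish instead of ten. A Python sketch of the packing idea; the length-prefixed layout here is an assumption for illustration, not the DataSocket wire format:

    ```python
    import struct

    values = [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5]  # 10 floats to publish

    # Sender side: one packed message instead of ten individual transfers.
    # A single length-prefixed payload carries the whole array in one round trip.
    payload = struct.pack("<I%df" % len(values), len(values), *values)

    # Receiver side: read the count, then unpack all floats in one call.
    (count,) = struct.unpack_from("<I", payload, 0)
    received = list(struct.unpack_from("<%df" % count, payload, 4))

    print(count, received[0], received[-1])
    ```

    Each transfer carries fixed per-message overhead, so amortizing it over the whole array is what buys the speedup, regardless of the transport.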
    Hope this helps.
    --Paul Mandeltort
    Automotive and Industrial Communications Product Marketing

  • Suggestions for a more efficient query?

    I have a client (customer) that uses a 3rd party software to display graphs of their systems. The clients are constantly asking me (the DBA consultant) to fix the database so it runs faster. I've done as much tuning as I can on the database side. It's now time to address the application issues. The good news is my client is the 4th largest customer of this 3rd party software and the software company has listened and responded in the past to suggestions.
    All of the tables are set up the same, with the first column being a DATE datatype and the remaining columns being values for different data points (data_col1, data_col2, etc.). Oh, that first date column is always named "timestamp" in LOWER case, so I've got to use double quotes around that column name all of the time. Each table collects one record per minute, every day of the year. There are 4 database systems, about 150 tables per system, averaging 20 data columns per table. I did partition each table by month and added a local index on the "timestamp" column. That brought the full table scans down to full partition index scans.
    All of the SELECT queries look like the following with changes in the column name, table name and date ranges. (Yes, we will be addressing the issue of incorporating bind variables for the dates with the software provider.)
    Can anyone suggest a more efficient query? I've been trying some analytic function queries but haven't come up with the correct results yet.
    SELECT "timestamp" AS "timestamp", "DATA_COL1" AS "DATA_COL1"
    FROM "T_TABLE"
    WHERE "timestamp" >=
          (SELECT MIN("tb"."timestamp") AS "timestamp"
           FROM (SELECT MAX("timestamp") AS "timestamp"
                 FROM "T_TABLE"
                 WHERE "timestamp" < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')
                 UNION
                 SELECT MIN("timestamp")
                 FROM "T_TABLE"
                 WHERE "timestamp" >= TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
           WHERE NOT "timestamp" IS NULL)
      AND "timestamp" <=
          (SELECT MAX("tb"."timestamp") AS "timestamp"
           FROM (SELECT MIN("timestamp") AS "timestamp"
                 FROM "T_TABLE"
                 WHERE "timestamp" > TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')
                 UNION
                 SELECT MAX("timestamp")
                 FROM "T_TABLE"
                 WHERE "timestamp" <= TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
           WHERE NOT "timestamp" IS NULL)
    ORDER BY "timestamp"
    Here are the queries for a sample table to test with:
    CREATE TABLE T_TABLE
    ( "timestamp" DATE,
      DATA_COL1 NUMBER );
    INSERT INTO T_TABLE
    (SELECT TO_DATE('01/20/2006', 'MM/DD/YYYY') + (LEVEL-1) * 1/1440,
            LEVEL * 0.1
     FROM dual CONNECT BY 1=1
       AND LEVEL <= (TO_DATE('01/25/2006','MM/DD/YYYY') - TO_DATE('01/20/2006', 'MM/DD/YYYY'))*1440);
    Thanks.

    No need for analytic functions here (they’ll likely be slower).
    1. No need for UNION ... use UNION ALL.
    2. No need for 'WHERE NOT "timestamp" IS NULL' … the MIN and MAX will take care of NULLs.
    3. Ask if they really need the data sorted … the s/w with the graphs may do its own sorting
    … in which case take the ORDER BY out too.
    4. Make sure to have indexes on "timestamp".
    What you want to see for those innermost MAX/MIN subqueries are executions like:
    03:19:12 session_148> SELECT MAX(ts) AS ts
    03:19:14   2  FROM "T_TABLE"
    03:19:14   3  WHERE ts < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS');
    TS
    21-jan-2006 00:12:00
    Execution Plan
       0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2.0013301108 Card=1 Bytes=9)
       1    0   SORT (AGGREGATE)
       2    1     FIRST ROW (Cost=2.0013301108 Card=1453 Bytes=13077)
       3    2       INDEX (RANGE SCAN (MIN/MAX))OF 'T_IDX' (INDEX) (Cost=2.0013301108 Card=1453 Bytes=13077)
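
    Applying points 1 and 2 above gives a query that can be sanity-checked on any engine. A small sqlite3 session as a sketch; sqlite and plain-text timestamps stand in for Oracle's DATE/TO_DATE here, which is an assumption of the example:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t_table (ts TEXT, data_col1 REAL)")
    # One row per minute for one day, like the original sample-data generator.
    rows = [("2006-01-21 %02d:%02d:00" % (h, m), h * 60 + m)
            for h in range(24) for m in range(60)]
    con.executemany("INSERT INTO t_table VALUES (?, ?)", rows)

    # Bracket the requested window by the nearest existing samples:
    # UNION ALL instead of UNION (no duplicate-elimination sort), and no
    # "IS NOT NULL" predicate, since MIN/MAX ignore NULLs anyway.
    sql = """
    SELECT ts, data_col1 FROM t_table
    WHERE ts >= (SELECT MIN(ts) FROM (
                   SELECT MAX(ts) AS ts FROM t_table WHERE ts <  '2006-01-21 00:12:39'
                   UNION ALL
                   SELECT MIN(ts) FROM t_table WHERE ts >= '2006-01-21 00:12:39'))
      AND ts <= (SELECT MAX(ts) FROM (
                   SELECT MIN(ts) AS ts FROM t_table WHERE ts >  '2006-01-21 12:12:39'
                   UNION ALL
                   SELECT MAX(ts) FROM t_table WHERE ts <= '2006-01-21 12:12:39'))
    ORDER BY ts
    """
    result = con.execute(sql).fetchall()
    print(result[0][0], result[-1][0], len(result))
    ```

    On Oracle the same shape keeps the innermost MIN/MAX subqueries eligible for the INDEX RANGE SCAN (MIN/MAX) access path shown in the plan above.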

  • Incredibly Long Lag Time When Working With Effects in Premiere Pro CC 2014

    I am having incredibly long lag times when working with effects in Premiere Pro CC 2014, like 60 seconds or more. This also happens periodically when I am doing something as simple as cutting into a timeline (just ripple deleting from an in to an out point).
    I have a MacBook Pro running OSX 10.9.4 2 GHz Intel Core i7 with 16 GB 1600 MHz DDR3. 
    I have seen that some people are having different versions of the same problem.  I do have RedGiant plug-ins, but I am not using any of them in this project.  The Red Giant plugs (again, not being used in the project) include MOJO and Universe.  The previous version most certainly did not work like this.  Any ideas?  I did try moving the Red Giant plugs out of the COMMON folder and "alt" starting up to reset preferences.  Still having same problems.
    A specific example is I have an interview shot.  I add brightness and contrast.  Just making a change to the effect (increasing brightness) causes freezing of the interface and long delays.  I added a piece of transparent video on a top layer to add a CIRCLE to create a vignette effect.  Same deal -- ridiculously long times for effect changes to take place.
    Does anyone have any clues as to how to fix these problems? 
    THANKS FOR ANYONE'S HELP!!!!!!!

    Got the same error. Previously I worked with a GTX 760, and was hoping to get it fixed by using a GTX 780 6GB. But just like you, that didn't solve the problem.
    Usually I got the problem solved by exiting Premiere, restarting it and resetting my workspace. Then the problem of the bugged workspace was gone, and also the problem of the missing video tracks.
    At the moment I have started on a new project. Pretty simple, with only 21 seconds of DV footage. Here I was switching tabs in my browser, came back to the project, and the video tracks were gone. I saved, closed Premiere, restarted, and the video tracks are still missing.
    So I made a new sequence, and voilà, the video tracks are there. Thus I need to copy and paste the tracks onto the new sequence.
    But that is not really a solution!
    I was on the phone with Adobe, and on chat, but they couldn't help me. They suggested getting a Tesla or Quadro card, or at least a GPU that is confirmed as working on their page. I did that, but obviously that is not the issue.

  • Why does my laptop give me a warning saying that firefox is using too much memory and to restart firefox to be more efficient? I just bought this laptop so I know it has the power to run what I need it to.

    Why does my laptop give me a warning saying that firefox is using too much memory and to restart firefox to be more efficient? I just bought this laptop so I know it has the power to run what I need it to.

    You appear to have AVG installed:
    *See --> http://forums.avg.com/ww-en/avg-forums?sec=thread&act=show&id=173969#post_173969
    From reading on the internet, it appears that when there is a spike in memory usage, AVG "interprets" that as a memory leak, possibly caused by malware. AVG could be incorrect concerning that assumption. Maybe they are being a bit too conservative about memory usage; just my opinion.
    The decision is yours to turn off the "advisor" or leave it on.
    Not related to your question, but...
    You may need to update some plug-ins. Check your plug-ins and update as necessary:
    *Plug-in check --> http://www.mozilla.org/en-US/plugincheck/
    *Adobe Shockwave for Director Netscape plug-in: [https://support.mozilla.com/en-US/kb/Using%20the%20Shockwave%20plugin%20with%20Firefox#w_installing-shockwave Installing ('''''or Updating''''') the Shockwave plugin with Firefox]
    *'''''Adobe PDF Plug-In For Firefox and Netscape''''': [https://support.mozilla.com/en-US/kb/Using%20the%20Adobe%20Reader%20plugin%20with%20Firefox#w_installing-and-updating-adobe-reader Installing/Updating Adobe Reader in Firefox]
    *Shockwave Flash (Adobe Flash or Flash): [https://support.mozilla.com/en-US/kb/Managing%20the%20Flash%20plugin#w_updating-flash Updating Flash in Firefox]
    *'''''Next Generation Java Plug-in for Mozilla browsers''''': [https://support.mozilla.com/en-US/kb/Using%20the%20Java%20plugin%20with%20Firefox#w_installing-or-updating-java Installing or Updating Java in Firefox]

  • Problem with effectiveness of filters

    I have on several occasions run into cases where the built-in "query optimizer" doesn't apply filters in the best order from a performance point of view when dealing with multiple filters joined together using AND filters.
    As a work-around I have subclassed the built-in filters, adding an extra constructor that allows setting the effectiveness as a hard-coded integer value that (if specified) is returned instead of the calculated value. This way I was able to force an AND filter to pick first the filter I know (from application knowledge) is most likely the most efficient.
    Would it be possible to get this kind of constructor added to the standard filter classes?
    I would also like to know a few things about how indexes are used by Coherence:
    1. How do the various built-in filters calculate their effectiveness? Does Coherence, for instance, keep statistics about how many unique values an index contains, or how does it decide which index is more effective than another?
    2. Can Coherence use more than one index at the same time (i.e. merge indexes)? From my experience it seems like only one index is used fully; other indexes are only used to avoid de-serialization when performing a linear search of the hits from the first index. As far as I know only the most advanced RDBMSs are able to use more than one index (instead of only performing a linear search of the result from one index), so I am not really surprised if Coherence does the same...
    3. Does making an index sorted improve performance in any way if the index is used as the second or third index applied in a query (one could envision that some sort of binary search could be used, instead of a linear search of the hits from the first index, if a sorted index is available)?
    /Magnus

    MagnusE wrote:
    I have on several occasions run into cases where the built-in "query optimizer" doesn't apply filters in the best order from a performance point of view when dealing with multiple filters joined together using AND filters.
    As a work-around I have subclassed the built-in filters, adding an extra constructor that allows setting the effectiveness as a hard-coded integer value that (if specified) is returned instead of the calculated value. This way I was able to force an AND filter to pick first the filter I know (from application knowledge) is most likely the most efficient.
    Would it be possible to get this kind of constructor added to the standard filter classes?
    You can simply create a wrapper filter which implements IndexAwareFilter and wraps another IndexAwareFilter. The calculateEffectiveness should be implemented as you want it, and you can delegate applyIndex() and evaluateEntry() to the wrapped filter.
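
    Robert's wrapper is Java (Coherence's IndexAwareFilter), but the delegation pattern itself is language-neutral. A hedged Python sketch, where the method names only echo the Java API and both classes are toy stand-ins, not Coherence calls:

    ```python
    class FixedEffectivenessFilter:
        """Wraps another index-aware filter and reports a hard-coded
        effectiveness, so an AND filter evaluates it in the desired order.
        Only calculate_effectiveness is overridden; the real work delegates."""

        def __init__(self, wrapped, effectiveness):
            self.wrapped = wrapped
            self.effectiveness = effectiveness

        def calculate_effectiveness(self, indexes, keys):
            return self.effectiveness                        # forced, not calculated

        def apply_index(self, indexes, keys):
            return self.wrapped.apply_index(indexes, keys)   # delegate

        def evaluate_entry(self, entry):
            return self.wrapped.evaluate_entry(entry)        # delegate


    class EqualsFilter:
        """Toy stand-in for a stock filter."""
        def __init__(self, field, value):
            self.field, self.value = field, value

        def calculate_effectiveness(self, indexes, keys):
            return len(keys)                                 # pessimistic default

        def apply_index(self, indexes, keys):
            return {k for k in keys if indexes[k].get(self.field) == self.value}

        def evaluate_entry(self, entry):
            return entry.get(self.field) == self.value


    # An AND filter would evaluate the filter with the lowest effectiveness first.
    indexes = {1: {"color": "red"}, 2: {"color": "blue"}, 3: {"color": "red"}}
    keys = {1, 2, 3}
    cheap = FixedEffectivenessFilter(EqualsFilter("color", "red"), 1)
    print(cheap.calculate_effectiveness(indexes, keys))  # 1, not len(keys)
    print(sorted(cheap.apply_index(indexes, keys)))      # [1, 3]
    ```

    The wrapper never re-implements filtering; it only lies about its cost, which is exactly the hard-coded-effectiveness workaround described in the question.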
    MagnusE wrote:
    I would also like to know how a few things about how indexes are used by Coherence:
    1. How do the various built-in filters calculate their effectiveness?
    I believe they return the number of reverse index accesses as the effectiveness if the index exists, and maybe the size of the candidate set if it does not.
    MagnusE wrote:
    Does Coherence, for instance, keep statistics about how many unique values an index contains, or how does it decide which index is more effective than another?
    Since the reverse index is just a map between extracted values and a collection of keys, this information is practically available from MapIndex.getIndexContents().size(), if I understand your question correctly; you can get the MapIndex for an extractor with mapIndexes.get(extractor).
    MagnusE wrote:
    2. Can Coherence use more than one index at the same time (i.e. merge indexes)? From my experience it seems like only one index is used fully; other indexes are only used to avoid de-serialization when performing a linear search of the hits from the first index.
    Yes, any number of indexes can be used, but the stock filters use only a single index (as you can specify a single extractor to the filter). Of course, if you form a logical expression between two filters using different extractors, indexes for both extractors will be used if they exist. The applyIndex() and calculateEffectiveness() methods receive all indexes in the mapIndexes parameter, so your custom index-aware filter can use any number of existing indexes at the same time.
    MagnusE wrote:
    3. Does making an index sorted in any way improve the performance if the index is used as the second or third index applied in a query?
    If you need range queries on the extracted value, a sorted index can help a great deal, as you don't have to iterate over all the keys in the reverse index. This is independent of what other filters you use in your logical expression; the evaluation of the subexpression on the sorted index will still be more efficient than if the index were unsorted.
    Best regards,
    Robert

  • XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD

    Hi
    XML Publisher (XDODTEXE) in EBS takes more time with the same SQL than in TOAD.
    The SQL has 5 UNION clauses.
    It takes 20-30 minutes in TOAD, compared to around 4-5 hours when run through a Concurrent Program in XML Publisher in EBS.
    The Scalable flag at report level is turned on, with the JVM options set to -Xmx1024m -Xmx1024m in the Concurrent Program definition.
    Other configurations for the Data Template, like XSLT, Scalable and Optimization, are turned on, though I didn't bounce the OPP server for these to take effect as I am not sure whether that is needed.
    Thanks in advance for your help.

    But the question is: how come it works in TOAD and takes only 15-20 minutes?
    With initialization of the session?
    What about sqlplus?
    Do I have to set up the temp directory for the XML Publisher report to make it faster?
    look at
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
    BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)

  • How can I create a video with effects using my iPad?

    How can I create a video with effects (sepia, B&W, negative, oval or any other shape borders) using my iPad?  I would like to keep a higher res and also be able to shrink it down to email or send in an MMS. Or should I just upload it to my PC and mess with it there? Some of the apps I have are very easy to use compared to the learning curve needed for video editing software.
    Thanks

    That's the problem... how many apps do I have to try until I find the one I want? I noticed a few will render the video, thus losing its original size. When I went to edit it more in iMovie the video was smaller--not good. And what software do you suggest, Templeton, for the PC? I love the apps because they are easy. I don't have hours to mess around with software to figure out if it's something I want. I'm looking for simplicity. Maybe I'll get more into it later. I just want to record simple video of my playing the guitar for self-analysis and create a short video for family and friends.
    Apps:
    iMovie
    CinemaFXV
    VideoFix
    Cartoonatic
    Video illusion
    VidEditor
    Software:
    Windows Movie Maker (won't accept .mov files)
    After Effects (Too little time, so much to learn)
    Wondershare (Very easy but little choices)
    VideoPad (Again. Few choices)

  • Linking from one PDF to another: Is there a more efficient way?

    Some background first:
    We make a large catalog (400 pages) in InDesign and it's updated every year. We are a wholesale distributor and our pricing changes, so we also make a price list with price ref #s that correspond with #s printed in the main catalogue.  Last year we also made this catalog interactive so that a PDF of it could be browsed using links and bookmarks. This is not too difficult using InDesign and making any adjustments in the exported PDF. Here is the part that becomes tedious, and is especially so this year:
    We also set up links in the main catalog that go to the price list PDF, opening the page with the item's price ref # and prices. Here's my biggest issue: I have not found any way to do this except making links one at a time in Acrobat Pro (and setting various specifications like focus and action and which page in the price list to open). Last year this wasn't too bad because we used only one price list. It still took some time to go through and set up 400-500 links individually.
    This year we've simplified our linking a little by putting only one link per page, but that is still 400 links. And this year I have 6 different price lists (price tiers...) to link to the main catalogue PDF. That's in the neighborhood of 1200-1500 repetitions of: double-click the link (button) to open Button Properties, click the Actions tab, click Add, choose "Go to page view", set the link to the other PDF page, click Edit, change Open In to "New Window" and set Zoom. This isn't a big deal if you only have a few Next, Previous, Home kind of buttons... but it's huge when you have hundreds of links. Surely there's a better way?
    Is there anyway in Acrobat or Indesign to more efficiently create and edit hundreds of links from one pdf to another?
    If anything is unclear and my question doesn't make sense please ask. I will do my best to help you answer my questions.
    Thanks

    George, I looked at the article talking about the fdf files and it sounds interesting. I've gathered that I could manipulate the pdf links by making an fdf file and importing that into the PDF, correct?
    Now, I wondered - can I export an FDF from the current PDF, change what is in there, and import it back into the PDF?  I've tried this (Forms > More Form Options > Manage Form Data > Export Data) and then opened the FDF in a text editor, but I see nothing related to the document's links... I assume this is because the export is 'form' data to begin with - but is there a way to export something with link data like that described in the article link you provided?
    Thanks

  • More-efficient keyboard

    Anyone have a way to get a better onscreen keyboard than the stock QWERTY one on the iPhone? I love the FITALY (fitaly.com) keyboard on my old Palm, but that company says Apple won't let them do a system keyboard. One would like to think that a Dvorak or other non-18th-century keyboard could be available among the international keyboards, but I don't find one.
    And yes, I've already tried the alternatives, e.g. TikiNotes, which takes me three times as long to type on as the QWERTY.
    If not, this would be a great suggestion for a new feature for Apple to incorporate.

    Just to let everybody know, I am now 28 years old. I first learned to type in typing class in junior high, using the QWERTY layout. The only nice thing I can say about QWERTY is that it's available on any computer you want to use without any configuration.
    Then I looked online for a better, more efficient way to type. That's when I learned about the Dvorak keyboard layout, about four years ago. I stuck with it for about two years, but I felt my right hand was doing a lot more typing than my left; it felt too lopsided for me. That's just my opinion. I went hunting for something better than Dvorak and found the glorious Colemak keyboard layout.
    I have been typing with it ever since. My hands are a lot more comfortable and I can type faster now. It took me a month to actually get comfortable with the layout. There is a Java applet on Colemak's website:
    www.colemak.com/Compare
    You can just copy and paste a body of text, click Calculate, and it will analyze the typing and compare the three different keyboard layouts. I just hope Colemak becomes an ANSI standard like Dvorak has.
    I just want everybody to know there is a third option out there, and it's great. If Colemak ever goes away I will go back to Dvorak; I will never learn the QWERTY layout again.
    Just wanted to give my two cents worth.
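    The simplest of the comparisons the Colemak applet runs can be sketched in a few lines of Python: count what fraction of a text's letters sit on each layout's home row. The home-row strings below are the standard ones for the three layouts, but this metric is only a rough proxy; the real applet also weighs finger travel and same-finger sequences.

```python
# Rough layout comparison: what fraction of a text's letters sit on
# each layout's home row. (A real analyzer also scores finger travel
# and hand alternation; this is only the simplest metric.)

HOME_ROWS = {
    "qwerty":  set("asdfghjkl"),
    "dvorak":  set("aoeuidhtns"),
    "colemak": set("arstdhneio"),
}

def home_row_share(text, layout):
    """Fraction of alphabetic characters typed on the given home row."""
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return 0.0
    on_home = sum(1 for c in letters if c in HOME_ROWS[layout])
    return on_home / len(letters)

sample = "the quick brown fox jumps over the lazy dog"
for name in HOME_ROWS:
    print(f"{name:8s} {home_row_share(sample, name):.0%}")
```

    Running it on a larger body of your own writing gives a quick feel for why Dvorak and Colemak score so much higher than QWERTY on this measure.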

  • GL Trial Balance Report with Effective Dates as Parameters

    We have a requirement to show the GL Trial Balance report with effective dates as parameters.
    Current analysis:
    When a journal is posted, the balances for its code combination (CCID) are updated in the GL_BALANCES table. GL_BALANCES is set-of-books specific; if the SOB uses month as its period, the balances in GL_BALANCES are updated against the month (period), not against individual dates.
    To work around this period granularity, we explored a view based on GL_JE_HEADERS and GL_JE_LINES for 'Posted' journal batches of a SOB, checking whether GL_JE_HEADERS.default_effective_date lies between :p_from_date and :p_to_date, which are passed to the report as parameters. However, the custom Trial Balance report built on this idea does not return the expected data.
    Following is the Query being used:
    SELECT cc.segment4 account,
           bal.code_combination_id,
           bal.begin_balance_dr + SUM (NVL (gljel.accounted_dr, 0)) opening_bal_dr,
           bal.begin_balance_cr + SUM (NVL (gljel.accounted_cr, 0)) opening_bal_cr,
           ffv.description,
           (SELECT SUM (NVL (gljel.accounted_dr, 0))
              FROM gl_je_headers gljeh,
                   gl_je_lines gljel,
                   gl_code_combinations gcc
             WHERE gljeh.default_effective_date BETWEEN :p_from_date AND :p_to_date
               AND gljeh.je_header_id = gljel.je_header_id
               AND gljel.code_combination_id = gcc.code_combination_id
               AND gljel.period_name = gljeh.period_name
               AND gljel.set_of_books_id = :p_set_of_books_id
               AND gljeh.status = 'P'
               AND gljel.status = 'P'
               AND gljeh.actual_flag = 'A'
               --AND gljel.code_combination_id = bal.code_combination_id
               AND gcc.segment4 = cc.segment4
             GROUP BY gcc.segment4) c_dr,
           (SELECT SUM (NVL (gljel.accounted_cr, 0))
              FROM gl_je_headers gljeh,
                   gl_je_lines gljel,
                   gl_code_combinations gcc
             WHERE gljeh.default_effective_date BETWEEN :p_from_date AND :p_to_date
               AND gljeh.je_header_id = gljel.je_header_id
               AND gljel.period_name = gljeh.period_name
               AND gljel.code_combination_id = gcc.code_combination_id
               AND gljel.set_of_books_id = :p_set_of_books_id
               AND gljeh.status = 'P'
               AND gljel.status = 'P'
               AND gljeh.actual_flag = 'A'
               AND gcc.segment4 = cc.segment4
             GROUP BY gcc.segment4) c_cr
      FROM gl_period_statuses per,
           gl_code_combinations cc,
           gl_balances bal,
           gl_je_headers gljeh,
           gl_je_lines gljel,
           fnd_flex_values_vl ffv,
           fnd_flex_value_sets ffvs
     WHERE cc.chart_of_accounts_id = :p_chart_of_accts_id
       AND bal.currency_code = :p_currency
       AND bal.actual_flag = 'A'
       AND bal.period_name = per.period_name
       AND cc.template_id IS NULL
       AND cc.code_combination_id = bal.code_combination_id
       AND per.set_of_books_id = :p_set_of_books_id
       AND per.application_id = 101
       AND :p_from_date BETWEEN per.start_date AND per.end_date
       AND gljeh.period_name = per.period_name
       AND gljeh.default_effective_date <= :p_from_date
       AND gljeh.je_header_id = gljel.je_header_id
       AND gljel.period_name = gljeh.period_name
       AND gljel.set_of_books_id = :p_set_of_books_id
       AND ffv.flex_value_set_id = ffvs.flex_value_set_id
       AND ffvs.flex_value_set_name = 'JSWEL_ACCOUNT'
       AND gljeh.status = 'P'
       AND gljel.status = 'P'
       AND cc.summary_flag = ffv.summary_flag
       AND cc.segment4 = ffv.flex_value
       AND gljeh.actual_flag = 'A'
       AND gljel.code_combination_id = bal.code_combination_id
     GROUP BY bal.begin_balance_dr,
              bal.begin_balance_cr,
              cc.segment4,
              ffv.description,
              bal.code_combination_id
    Kindly suggest if I am missing anything. I am sure that the great guns here can help me out.
    Thanks
    Sumit
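    The date arithmetic the query is reaching for can be isolated from the SQL. Below is a small Python sketch (all journal figures invented) of the usual effective-date trial-balance logic: opening balance = period begin balance plus posted activity dated before :p_from_date, and period activity = posted lines whose effective date falls in [:p_from_date, :p_to_date]. Note the boundary choice: the query above uses `<= :p_from_date` on the opening side while also including :p_from_date in the activity range, so lines dated exactly on :p_from_date would be counted on both sides; the sketch puts the boundary date in period activity only.

```python
from datetime import date

# Invented sample journal lines for one account:
# (effective_date, accounted_dr, accounted_cr), all assumed posted actuals.
lines = [
    (date(2024, 1, 5), 100.0, 0.0),
    (date(2024, 1, 20), 0.0, 40.0),
    (date(2024, 2, 3), 250.0, 0.0),
    (date(2024, 2, 15), 0.0, 30.0),
]

def trial_balance(begin_dr, begin_cr, lines, p_from, p_to):
    """Opening balance = begin balance + posted activity before p_from;
    period activity = lines with effective date in [p_from, p_to]."""
    opening_dr = begin_dr + sum(dr for d, dr, cr in lines if d < p_from)
    opening_cr = begin_cr + sum(cr for d, dr, cr in lines if d < p_from)
    period_dr = sum(dr for d, dr, cr in lines if p_from <= d <= p_to)
    period_cr = sum(cr for d, dr, cr in lines if p_from <= d <= p_to)
    return opening_dr, opening_cr, period_dr, period_cr

result = trial_balance(1000.0, 500.0, lines,
                       date(2024, 2, 1), date(2024, 2, 28))
```

    Checking the SQL's outer query and scalar subqueries against this split (and against which side the :p_from_date boundary belongs to) is a good first step before tuning anything else.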

    I suggest creating a customized Trial Balance report.

  • I installed lv 6.1 on a windows 98 system. As I am having problems of stability (much more than with the former release 6.0), I would like to know if there is some specific problem with windows 98 or there is some patch.

    I installed lv 6.1 on a windows 98 system. As I am having problems of stability (frequent crashes, much more than with the former release 6.0), I would like to know if there is some specific problem with windows 98 or there is some patch available.

    My experience with Win98 is that it is not a very stable system, regardless of the software used; Win2000 and XP are far more stable. I've had it crash on its own if I leave the computer on for several days.
    I wouldn't recommend running programs for a long time (a few days) on this OS.
    That said, can you be more specific? What kind of stability problems did you have, which VIs did you run (if possible, post them here), did you change the interrupts and priority levels on those VIs, and do you get error messages or a blue screen?
    Zvezdana S.

  • More efficient way to extract number from string

    Hello guys,
    I am using this regexp to extract numbers from a string, and I suspect there is a more efficient way to get this done:
    SELECT regexp_replace (regexp_replace (regexp_replace ('  !@#$%^&*()_+= '' + 00 SDFKA 324 000 8702 234 |  " ', '[[:punct:]]', ''), '[[:space:]]', ''), '[[:alpha:]]', '') FROM dual
    Is there a more efficient way to get this done?
    Regards,
    Fateh

    Or, with less writing, using Perl-style \D (non-digit):
    SELECT  regexp_replace('  !@#$%^&*()_+= '' + 00 SDFKA 324 000 8702 234 |  " ','\D')
      FROM  dual
    REGEXP_REPLACE(
    003240008702234
    SQL>
    SY.
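    For comparison outside the database, the same strip-everything-but-digits operation is a single call in Python's re module; the input string below mirrors the thread's example and produces the same result as the Oracle \D version.

```python
import re

# Same sample string as the thread (doubled quote written as a Python escape).
s = '  !@#$%^&*()_+= \' + 00 SDFKA 324 000 8702 234 |  " '

# Remove every non-digit character in one pass.
digits = re.sub(r'\D', '', s)
print(digits)  # -> 003240008702234
```

    Like the Oracle one-liner, this is a single regex pass rather than three nested replacements over punctuation, whitespace, and letters.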

Maybe you are looking for