Paging and Non-Destructive Filter

Hi all,
I am trying to create a list screen for a small application I
am working on and I've hit a problem.
Originally my list was to show just 10 results at a time and
for this I used the code from the "Paging Sample" and managed to
get this working with next and prev buttons.
All fine at this stage.
However, I now need to be able to filter the list based on
criteria entered into a text field by the user, and the "Non
Destructive Filter Sample" provided a good solution.
I implemented the functions and it sort of works. When the
list first loads the paging functions filter this full list to the
10 results I require and pressing the next button shows the next
10. The non destructive filter also works in that it will filter
the list based on user input.
The problem I have is that if a user keys in criteria that
make the non-destructive filter return more than 10 results (say
30), they all show and the paging functions are no longer used.
I have tried in vain to apply the paging functions to the non-destructive
filter functions so that when they return the rows, the
results are limited to 10 at a time.
Has anyone else come across this? Does anyone have any
advice or sample code that might help?
Thanks in advance.
T12

Hi T12,
Check out this sample:
http://labs.adobe.com/technologies/spry/samples/data_region/SpryPagedViewSample.html
It's a preview of a paging approach we're playing with. The
idea is that you use the paged view data set for displaying the
data, but you use the original data set to do all your filtering,
sorting, etc.
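For reference, here is a minimal sketch of that idea, pieced together from the PagedView sample code quoted further down this thread; the file name "items.xml" and the column name "name" are placeholders for your own data:

var dsItems = new Spry.Data.XMLDataSet("items.xml", "/items/item");

// The paged view wraps the original data set and exposes 10 rows at a time.
var pvItems = new Spry.Data.PagedView(dsItems, { pageSize: 10 });
var pvItemsPagedInfo = pvItems.getPagingInfo();

// The non-destructive filter is applied to the ORIGINAL data set;
// the paged view re-pages whatever rows survive the filter.
function FilterItems(text)
{
    if (!text)
    {
        dsItems.filter(null); // empty field: remove the filter
        return;
    }
    var regExp = new RegExp(text, "i");
    dsItems.filter(function(ds, row, rowNumber)
    {
        var str = row["name"];
        return (str && str.search(regExp) != -1) ? row : null;
    });
}

Your display region then binds to pvItems (and the next/prev controls to pvItemsPagedInfo) instead of dsItems, so only one page of the filtered rows is rendered at a time.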
--== Kin ==--

Similar Messages

  • Using Non-destructive filter with Nested XML data

    Hi,
    How do you use Non-destructive filter with Nested XML data?
    I am using the non-destructive filter sample with my own XML, which is set up to search the <smc></smc> values in the XML below. But when I test it, it only searches the last row of the "smc" values. How can I make it work so it searches the repeating nodes, or does it have something to do with how my XML is set up?
        <ja>
            <url>www.sample.com</url>
            <jrole>Jobrole goes here</jrole>
            <prole>Process role goes here...</prole>
            <role>description...</role>
            <prole>Process role goes here...</prole>
            <role>description....</role>
            <prole>Process role goes here...</prole>
            <role>description...</role>
            <sjc>6K8C</sjc>
            <sjc>6B1B</sjc>
            <sjc>6B1F</sjc>
            <sjc>6B1D</sjc>
            <smc>6C9</smc>
            <smc>675</smc>
            <smc>62R</smc>
            <smc>62P</smc>
            <smc>602</smc>
            <smc>622</smc>
            <smc>642</smc>
            <smc>65F</smc>
            <smc>65J</smc>
            <smc>65L</smc>
            <smc>623</smc>
            <smc>625</smc>
            <smc>624</smc>
            <smc>622</smc>
            <audience>Target audience goes here....</audience>
        </ja>
    here is the javascript that runs it.
    function FilterData()
    {
        var tf = document.getElementById("filterTF");
        if (!tf.value)
        {
            // If the text field is empty, remove any filter
            // that is set on the data set.
            ds1.filter(null);
            return;
        }
        // Set a filter on the data set that matches any row
        // that begins with the string in the text field.
        var regExpStr = tf.value;
        if (!document.getElementById("containsCB").checked)
            regExpStr = "^" + regExpStr;
        var regExp = new RegExp(regExpStr, "i");
        var filterFunc = function(ds, row, rowNumber)
        {
            var str = row["smc"];
            if (str && str.search(regExp) != -1)
                return row;
            return null;
        };
        ds1.filter(filterFunc);
    }
    function StartFilterTimer()
    {
        if (StartFilterTimer.timerID)
            clearTimeout(StartFilterTimer.timerID);
        StartFilterTimer.timerID = setTimeout(function() { StartFilterTimer.timerID = null; FilterData(); }, 100);
    }
    I really need help on this, or are there any other suggestions or samples that might work?
    thank you!
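    One possible workaround, sketched minimally (the file name "data/ja.xml" is a placeholder), is to point a second data set's XPath directly at the repeating node so that each <smc> element becomes its own row, and run the filter against that data set instead:

    var dsSMC = new Spry.Data.XMLDataSet("data/ja.xml", "/ja/smc");
    // Each <smc> element is now a separate row with a column named "smc",
    // so a filter that tests row["smc"] sees every value, not just the
    // last one that survives when the repeating children are flattened.
    dsSMC.filter(function(ds, row, rowNumber)
    {
        var str = row["smc"];
        return (str && str.search(/^6C/i) != -1) ? row : null; // example pattern
    });

    Whether this fits depends on your layout, since the other <ja> columns are no longer on the same rows as the smc values.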

    I apologize, I'm using a Spry XML Data Set. I guess the best way to describe what I'm trying to do is: I want to use the Non-destructive filter sample with the Spry Nested XML Data sample. So with my sample XML from my first post, and with the same code the non-destructive filter is using, I'm having trouble trying to search repeating nodes; for some reason it only searches the last node of the repeating nodes. Does that make sense? Let me know.
    thank you Arnout!

  • I Dislike the Terms "Destructive" and "Non-Destructive" Editing

    Some folks in the Photoshop realm use the terms "destructive" and "non-destructive" to describe ways of using Photoshop in which transforms are applied directly to pixel values vs. being applied via layers or smart filters or smart objects or other means.
    Do you realize that the term "destructive" is actually mildly offensive to those who know what they're doing and choose to alter their pixel values on purpose?
    I understand that teaching new people to use Photoshop in a way that doesn't "destroy" their original image data is generally a good thing, and I'm willing to overlook the use of the term as long as you don't confront me and tell me what I'm doing when I choose to alter pixel values is "wrong" (or when I choose to advise others on doing so).
    For those people who claim editing pixel values is "destructive", I offer this one response, which is generally valuable advice, in return:
    Never overwrite your original file.
    There.  The "destruction" has ceased utterly.
    It's common sense, really.  You might want to use that file for something else in the future.
    If you shoot in raw mode with a digital camera, then you actually can't overwrite your raw files.  That's a handy side effect, though not everyone uses raw mode or even starts from a digital photograph.
    In any case, when you open your image consider getting in the habit of immediately doing File - Save As and creating a .psd or .tif elsewhere, so that you can subsequently do File - Save to save your intermediate results.
    There can actually be many advantages to altering pixel values, if you know what you're doing and choose to do so.  But sometimes even the most adept Photoshop user might find that a given step created a monster; that's okay, there's a multi-step History palette for going back.  I normally set mine to keep a deep history, to give me a safety net if I DO do something wrong, though I tend to use it rarely.
    And for those who would tout the disadvantages to editing "destructively", there can be huge disadvantages to doing it "non-destructively" as well...  Accumulating a large number of layers slows things down and can use a lot of RAM...  With downsized zooms the mixing can yield posterization that isn't really there, or gee whiz, just TRY finding a computer fast enough to use smart filters in a meaningful way.  Just the concept of layers, if one hasn't worked out how layer data is combined in one's own mind, can be daunting to a new person!
    So I ask that you please stop saying that the "only" or "best" way to use Photoshop is to edit "non-destructively".  There are folks who feel that is offensive and arrogant.  I think the one thing everyone can agree upon is that THERE IS NO ONE OR BEST WAY TO USE PHOTOSHOP!
    You go ahead and do your editing your way.  I prefer to do "constructive" editing. 
    Thanks for listening to my rant.
    -Noel
    Man who say it cannot be done should not interrupt man doing it.

    Aegis Kleais wrote:
    When you alter image data in a manner that cannot be reverted, you have destroyed it.
    Really?
    That's one of those things that one is not supposed to question.  It just sounds so right!
    Problem is, it's insufficient in and of itself, and misleading...  It's a rule of thumb that's way too general.
    What IS "data" anyway?  Arrangement of magnetic spots on a disk?  My disk is still whole, so we're not talking about physical destruction here.
    One could argue that all the data is all still there in many cases of pixel-value-change editing (e.g., where there has been no resizing).  The image file is the same size!  Same amount of data.
    Upsampling, or making a copy of an image is actually creating more data, not destroying data.  Thus there is no general "destruction", but the terms "construction" or "creation" could be used.
    But wait, perhaps you're really talking about destroying information, not data...  Well...
    As it turns out the term "destructive" is still off base.  I have altered the information, possibly even adding important information.  If I make a copy this is a no brainer.  Even if I don't, depending on a person's skill in editing, the altered result could still carry all the original information that was important plus information added by editing, and be quite possibly better for its intended purpose (human consumption) than the image before the edit.  That's the goal!
    So now we're talking about important information vs. unimportant information.  And of course we're talking about fitness for a future purpose.
    As with anything, there are multiple ways to get there and multiple ways to interpret the words.
    The term "destructive" in my opinion was invented to further someone's agenda.
    -Noel

  • Destructive and non-destructive buffer reads on branch wires

    I was asked this morning what "destructive and non-destructive buffer reads on branched wires" are.
    I was at a loss at first, then I thought a bit and read a posting by Jim that inspired me.
    My first thought was the case of an array wired to both a Replace Array Element and an Index Array function. In that case the index has to execute before the Replace Array Element, because the replace re-uses its input buffer. Not really destructive, but it seemed close.
    There is also the case of a buffer (like a string) being wired into a CIN and the same buffer being re-used by the CIN to return the result. The CIN "destroys" the original values.
    Jim mentioned the Concatenate Strings function having to destroy the buffers holding the strings that are concatenated and moved to another buffer.
    Also, when I do an AI read, the buffer I read from is destroyed after I read it. Similarly with reading from VISA and the like.
    What does the above phrase really mean and are there other examples that I have missed?
    Trying to learn something here,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

    That phrase may have been describing a combination of factors. Documentation from NI points out that if a wire branches, LabVIEW will try to avoid making a copy of the data, including scheduling (as in your example) read operations on branch A before write operations on branch B if possible. Since branch A doesn't tamper with the data, no copy is necessary.
    The author of a CIN or DLL could indeed choose to create his function in such a manner that it uses the same physical buffer in memory for input and output. This is by no means the only way to return values from a function, but it's quite common for efficiency reasons. LabVIEW might even assume that a CIN or DLL always overwrites any buffer passed into it (and may therefore make a copy if a wire branches both to a CIN and somewhere else).
    So to me, the phrase you cited connotes the G compiler's decision of whether or not a wire branch modifies the data (e.g., string concatenation, removing elements from arrays) or just reads it (e.g., an index or input to a calculation).
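    To make the distinction concrete outside of LabVIEW, here is a small JavaScript analogy (not LabVIEW code, just an illustration of why the scheduler cares about ordering when two branches share one buffer):

    // Two "branches" share one underlying buffer (array).
    var buffer = [1, 2, 3, 4];

    // Non-destructive read: only inspects the data, so no copy is needed.
    function sumBranch(buf) {
        return buf.reduce(function (a, b) { return a + b; }, 0);
    }

    // Destructive use: overwrites the same buffer in place, like a CIN that
    // returns its result in the input buffer.
    function scaleInPlaceBranch(buf, k) {
        for (var i = 0; i < buf.length; i++) buf[i] *= k;
        return buf;
    }

    var total = sumBranch(buffer);              // reads 1+2+3+4 = 10
    var scaled = scaleInPlaceBranch(buffer, 2); // buffer is now [2, 4, 6, 8]
    // If the destructive branch ran first, the reader would need its own copy
    // of the original data - exactly the copy the compiler tries to avoid by
    // scheduling the non-destructive read before the destructive write.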

  • Multiple Non-Destructive Filter Paged View

    Example - non-working. See code below.
    I would like to apply the multiple filter example to data
    loaded in a paged view.
    So far everything loads and the form controls all work, but
    independently.
    Using the States/Cities data, theoretically a user should be
    able to select the state > show cities that start with Q-Z >
    then search which cities contain X.
    OR
    Select State > Search Cities that contain X > remove
    cities that start with A-H.
    Any thoughts? Do any further details need explaining?
    My final goal will be to add a third data source such as
    population, where a user could filter out the cities according to
    population size instead of the city names.
    Apologies if this has been addressed in another thread; if so, where
    can I find it?
    I also cannot find documentation on SpryDataExtensions.
    <---Code--->
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xmlns:spry="http://ns.adobe.com/spry">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
    <title>Spry.Data.PagedView Sample</title>
    <link href="../../css/samples.css" rel="stylesheet" type="text/css" />
    <style type="text/css">
    .select {
    background-color: black;
    color: white;
    }
    .hover {
    background-color: #FFCC66;
    color: black;
    }
    .currentPage {
    font-weight: bold;
    color: red;
    }
    </style>
    <script language="JavaScript" type="text/javascript" src="xpath.js"></script>
    <script language="JavaScript" type="text/javascript" src="SpryData.js"></script>
    <script language="JavaScript" type="text/javascript" src="SpryDataExtensions.js"></script>
    <script language="JavaScript" type="text/javascript" src="SpryPagedView.js"></script>
    <script language="JavaScript" type="text/javascript">
    <!--
    var dsStates = new Spry.Data.XMLDataSet("states/states.xml", "/states/state");
    var dsCities = new Spry.Data.XMLDataSet("states/{dsStates::url}", "/state/cities/city");
    // Create a PagedView that will manage the paging of the data that is loaded
    // into dsCities.
    var pvCities = new Spry.Data.PagedView(dsCities, { pageSize: 20 });
    var pvCitiesPagedInfo = pvCities.getPagingInfo();
    // FilterData() and StartFilterTimer() are not required for paging at all. They are
    // here only to support the filtering function used within this sample to show that
    // the PagedView automatically adjusts paging as the data in the data set it depends
    // on changes dynamically.
    function FilterData()
    {
        var tf = document.getElementById("filterTF");
        if (!tf.value)
        {
            // If the text field is empty, remove any filter
            // that is set on the data set.
            dsCities.filter(null);
            return;
        }
        // Set a filter on the data set that matches any row
        // that begins with the string in the text field.
        var regExpStr = tf.value;
        if (!document.getElementById("containsCB").checked)
            regExpStr = "^" + regExpStr;
        var regExp = new RegExp(regExpStr, "i");
        var filterFunc = function(ds, row, rowNumber)
        {
            var str = row["name"];
            if (str && str.search(regExp) != -1)
                return row;
            return null;
        };
        dsCities.filter(filterFunc);
    }
    function StartFilterTimer()
    {
        if (StartFilterTimer.timerID)
            clearTimeout(StartFilterTimer.timerID);
        StartFilterTimer.timerID = setTimeout(function() { StartFilterTimer.timerID = null; FilterData(); }, 100);
    }
    function ffAH(ds, row, index){ var c = row.name.charAt(0); return c >= 'A' && c <= 'H' ? null : row; }
    function ffIP(ds, row, index){ var c = row.name.charAt(0); return c >= 'I' && c <= 'P' ? null : row; }
    function ffQZ(ds, row, index){ var c = row.name.charAt(0); return c >= 'Q' && c <= 'Z' ? null : row; }
    function ToggleFilter(enable, f)
    {
        if (enable)
            dsCities.addFilter(f, true);
        else
            dsCities.removeFilter(f, true);
    }
    function RemoveAllFilters()
    {
        document.forms[0]["fAH"].checked = false;
        document.forms[0]["fIP"].checked = false;
        document.forms[0]["fQZ"].checked = false;
        dsCities.removeAllFilters(true);
    }
    -->
    </script>
    </head>
    <body>
    <!-- BEGIN Data loading and filtering controls. -->
    <div> Choose a State: <span spry:region="dsStates"
    id="stateSelector">
    <select spry:repeatchildren="dsStates" name="stateSelect"
    onchange="dsStates.setCurrentRowNumber(this.selectedIndex);">
    <option spry:if="{ds_RowNumber} == {ds_CurrentRowNumber}"
    value="{name}" selected="selected">{name}</option>
    <option spry:if="{ds_RowNumber} != {ds_CurrentRowNumber}"
    value="{name}">{name}</option>
    </select>
    </span> Enter the Name of a City:
    <input type="text" id="filterTF"
    onkeyup="StartFilterTimer();" />
    Contains:
    <input type="checkbox" id="containsCB" checked="checked"
    onchange="FilterData();" />
    </div>
    <!-- END Data loading and filtering controls. -->
    <div class="liveSample" >
    <form action="">
    <label>Filter out 'A' - 'H':
    <input name="fAH" type="checkbox" value=""
    onclick="ToggleFilter(this.checked, ffAH);" />
    </label>
    <label>Filter out 'I' - 'P':
    <input name="fIP" type="checkbox" value=""
    onclick="ToggleFilter(this.checked, ffIP);" />
    </label>
    <label>Filter out 'Q' - 'Z':
    <input name="fQZ" type="checkbox" value=""
    onclick="ToggleFilter(this.checked, ffQZ);" />
    </label>
    <input type="button" value="Remove All Filters"
    onclick="RemoveAllFilters();" />
    </form>
    </div>
    <!-- BEGIN PagedView Controls -->
    <p spry:region="pvCitiesPagedInfo"
    spry:repeatchildren="pvCitiesPagedInfo"> <a
    spry:if="{ds_CurrentRowNumber} != {ds_RowNumber}" href="#"
    onclick="pvCities.goToPage('{ds_PageNumber}'); return
    false;">{ds_PageFirstItemNumber}-{ds_PageLastItemNumber}</a>
    <span spry:if="{ds_CurrentRowNumber} == {ds_RowNumber}"
    class="currentPage">{ds_PageFirstItemNumber}-{ds_PageLastItemNumber}</span>
    </p>
    <!-- END PagedView Controls -->
    <!-- BEGIN PagedView Info Section -->
    <div spry:region="pvCities">
    <p spry:if="{ds_UnfilteredRowCount} &gt; 0">Page
    {ds_PageNumber} of {ds_PageCount} - Items {ds_PageFirstItemNumber}
    - {ds_PageLastItemNumber} of {ds_UnfilteredRowCount}</p>
    <p spry:if="{ds_UnfilteredRowCount} == 0">No matching
    data found!</p>
    </div>
    <!-- END PagedView Info Section -->
    <!-- BEGIN Paged Display Section -->
    <div spry:region="pvCities dsCities">
    <ul spry:repeatchildren="pvCities"
    spry:choose="choose">
    <li spry:when="{pvCities::ds_RowID} ==
    {dsCities::ds_CurrentRowID}" spry:select="select"
    spry:selectgroup="page" spry:selected="selected" spry:hover="hover"
    spry:setrow="pvCities">{pvCities::name}</li>
    <li spry:default="default" spry:select="select"
    spry:selectgroup="page" spry:hover="hover"
    spry:setrow="pvCities">{pvCities::name}</li>
    </ul>
    </div>
    <!-- END Paged Display Section -->
    </body>
    </html>

    Hi,
    I'm using the same sample. How can you make it so it can also search XML attribute values?
    Like, for example, the ABC in the title attribute: <role title="ABC">Example</role>?
    thank you!
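    A minimal sketch of one way this might be done, assuming your data set's XPath selects the <role> nodes: Spry exposes a node's attributes as columns prefixed with "@", so the filter function can test the attribute column alongside the text. The column name for the element text depends on your XPath and markup, so treat "role" below as a placeholder:

    var filterFunc = function(ds, row, rowNumber)
    {
        var text = row["role"];     // element text (placeholder column name)
        var title = row["@title"];  // the title="..." attribute
        if ((text && text.search(regExp) != -1) ||
            (title && title.search(regExp) != -1))
            return row;
        return null;
    };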

  • View entire dataset after non destructive sorting.

    So I have a page that calls a large number of records, and I let the user narrow the choices down using a standard non-destructive filter in a text box. I return the results in a repeating <tr> table. The recordset contains maybe 15 data points per row. I only show them about 6 on this page, since the rest is not that meaningful or wouldn't be searchable. However, once the user filters this data down, I would like them to see the entire filtered dataset in a table or something for exporting to Excel. How do I access that newly filtered dataset in its entirety? Do I have to write it out like I am doing with this visible table, or is there some secret weapon for me to get this newly filtered stuff?
    Thanks.
    Jon

    A URL to your page will be helpful.
    Gramps
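    For anyone who lands here later, a minimal sketch of one way to walk the filtered rows and write them out for export, assuming Spry's ds.getData() returns the row array with the non-destructive filter applied (worth verifying against your Spry version); the data set and column names are placeholders:

    var rows = dsRecords.getData();
    var html = "<table>";
    for (var i = 0; i < rows.length; i++)
    {
        // Write out every column you want in the export, not just the 6
        // shown in the on-screen region.
        html += "<tr><td>" + rows[i]["name"] + "</td><td>" + rows[i]["detail"] + "</td></tr>";
    }
    html += "</table>";
    document.getElementById("exportDiv").innerHTML = html;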

  • Destructive vs. non-destructive editing

    If I send a sequence from FCP to a multitrack project in STP, it will be a non-destructive edit. Once I am in STP, I right-click on a track and select "Open in Editor" to take out clicks/pops, etc. Does opening this track in the editor from STP now change the edit to a destructive edit? I want to edit everything from my FCP project non-destructively in STP. Does creating a multitrack project in STP and THEN opening a track in the editor in STP change this to a destructive edit, or is it still non-destructive?

    Hi Brian:
    I read the manual to see the difference between destructive and non-destructive editing in STP ...
    What is STP?
    This forum refers to DVD Studio Pro, the app to author DVDs.
    FCP (Final Cut Pro) is non-destructive video editing software.
    As far as I know, iMovie (Apple's entry-level video editing software) is a destructive editing tool, although you can use some tricks to avoid "destroying" the source (I have not used it for some time).
    Please, clarify your post and you'll get an answer for your problem for sure.

  • How do I non-destructively sharpen, re-size and save my images if I'm using both LR & CS6?

    Hi guys {and gals}... 
    Ok... here is my dilemma. I am having an incredibly difficult time understanding the best way to sharpen, re-size and save my images for both posting on the web and giving them to clients. I completed my first paid photo shoot (yay!), but as I finished editing each image, I re-sized it and posted it on my FB photography page. I later learned from a fellow at my local print shop that this is a destructive and irreversible edit (not yay! ).
    So...  before I pull out every last strand of hair on my head, I REAALLLYYYY need to get a good grasp on how to do the following things so that I can establish a good workflow: 1. Sharpen my image well {w/ Smart Sharpen}. Does this have to be done on a flattened image... and isn't flattening irreversible?  2. Re-sizing my images for both web display and client work/printing. Is it true that once I set it to 72ppi for web display, that I lose a great deal of the detail and quality? Do I need to create a copy of the file and have 2 different image sizes?
    I am self taught, learning off the cuff through tutorials and constant error... and I just want so badly to have a smooth and beneficial work flow in place.
    Currently, my workflow is as follows...  1. Load images into LR and convert to DNG files  2. Quick initial edit & then send into PS CS6  3. Perform detailed/layered edit(s)  4. {I know I'm supposed to sharpen now, as the last step, but am afraid to permanently flatten my image in case I want to tweak the layers later..}  5. Save the file (unflattened)  6. Go back into LR and Export the file to the appropriate place on my hard drive
    So... at this point, my image is still at 300ppi {not appropriate for web display}, unflattened {I'm told flattened images are ideal for client work and printing} and not as sharp as I want it to be {because I don't know when to apply Smart Sharpen filter}.
    HELP!!!!!!! 
    Thanks in advance for "listening" to me ramble...
    ~ Devon

    There are a lot smarter guys on this forum than I so will let them give you ideas on the sharpen workflow.
    Is DNG the same as raw in that all the edits are non-destructive?  With raw, all the edits are put in a separate XMP file, and I believe with DNG the XMP data is written into the image file.  In this case I would suggest you save the DNG, then create a JPEG to send to clients or post on the web.  A JPEG will not save layers, so it is by its nature flattened.
    Since you are new to this, try this test to understand ppi.  Click on Image > Image Size.
          Change Document Size to inches.
          Now uncheck "Resample Image"; if this is checked, the pixels will be modified to fit the new size, and unchecked, no pixels are changed.
          Now adjust the resolution from 72 to 300 ppi (pixels per inch).  Note that the image size in pixels does not change, but the document size does.  This means the pixel data is unchanged.
          Now check "Resample Image" and change the resolution.  Note how the image size in pixels changes while the document size stays the same.
    Bottom line: the quality of the picture is the image size in pixels.  The larger the numbers, the higher the quality.
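    For example (hypothetical numbers): a 3000 x 2000 px image at 300 ppi corresponds to a 10 x 6.67 in document; change the resolution to 72 ppi with Resample Image unchecked and it is still 3000 x 2000 px, it would simply print at about 41.7 x 27.8 in. Only resampling changes the pixel count, and therefore the quality.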

  • Filter Data with Merged and non merged columns in one

    Hi there,
    I have an Excel spreadsheet that has merged and non-merged columns. What I want to be able to do is filter a row that has a merged column and non-merged columns. But when I filter, it only takes the first line, rather than the merged and non-merged columns.
    With this data I have one merged column which spans 6 rows, and alongside it I have 6 rows with different points in them. What I want to do is filter my list but still be able to see the merged column as well as the 6 points.
    Any ideas are much appreciated.
    Cheers
    SAN

    You cannot filter across a row - so I have assumed that what you mean is that some cells in columns are merged to serve as the headers, and the data is in the same columns but in the row(s) below the header. If this is incorrect, ignore this post.
    For this example, I have assumed E to J are the column of data and merged cells, column K is free, and the first merged header is in row 1:
    In K1, enter
    =E1
    in K2, enter
    =IF(COUNTA(E2:J2)=1,E2,K1)
    and copy down. Then filter based on column K, and it will show the headers and the data for the selected header value.
    HTH, Bernie

  • Filter and non-conforming dimensions

    I have a model design which includes three fact tables with non-conforming dimensions. This causes BI to create multiple queries for a report and finally bring the results together using a full outer join at the end. When I attempt to filter on a field from one of the non-conforming dimensions, that filter is not applied at the full outer join step but during an earlier step related to the chosen filter field. This results in more data returning than desired. I need to move the filter to the full outer join step. Here are two sub-optimal methods that I have found to work around the issue.
    1. Build the logical query in answers. Then, wrap that query with an outer query and apply the filter to the outer query.
    2. Build a minus query in answers that removes the records you don't want to see.
    I have seen this issue discussed in other threads, but I haven't found a good solution. Does anyone have any recommendations?
    Thank you,
    Edited by: user10715047 on May 21, 2010 7:13 AM
    Here is a decent description of the problem
    http://siebel.ittoolbox.com/groups/technical-functional/siebel-analytics-l/two-fact-tables-and-nonconforming-dimensions-3297052
    and this is a better solution than the two I stated above. However, even this solution is not going to be very intuitive for the users
    http://siebel.ittoolbox.com/groups/technical-functional/siebel-analytics-l/two-fact-tables-and-nonconforming-dimensions-3298529

    I know this is a bit of an old thread, but I thought it might be helpful to someone who comes across the same issue...
    When using a degenerate dimension (inner joined to the FACT tables in the BMM), if any of the dimensions other than the degenerate dimension (let us say Dim X) has an outer join to any of the fact tables, and you are doing your analysis using the degenerate dimension, Dim X, and a measure value, you will face the following issue:
    when filtering the analysis on the outer-joined dimension (Dim X), the IN filter will not work. The reason is that the filter gets applied to both the dimension and FACT tables, so values that exist in dimension Dim X but not in the FACT table won't show up.
         The above issue can be fixed by changing the join between the fact table and the degenerate dimension from inner to outer.
              Please mark if you found this helpful.

  • Any way to non-destructively edit in PSE and save edits ?

    Is there any way to non-destructively edit in PSE, or is this only available in CS and LR? I would like to use Viveza and be able to save my edits.
    thanks

    It is the way Apple has chosen to deal with external editor edits, as I have written below in various other threads so no one else has to talk to someone for over two hours about the same thing...
    Okay.... so after being on the phone with an Apple senior developer for 2 hrs and 29 minutes, the duplicating of originals has been completely intentional.
    Basically, in a nutshell, we have lost the ability to revert back to previous versions of a photo. So, KEEP YOUR ORIGINAL!!!!!!!! Once you make an edit in an external editor, there is no going back unless you go back to the original. There is no reversing any edit in an external edit.
    It only makes a copy of the original. So, if you make an edit of an edit, you will only have access to the photo where it is after the second edit. The revert to original does not work on external edits.
    I gave suggestions on how to make an original automatically hide or be tagged somehow so it can be hidden and also a check before you delete..... I discussed many different ways of going about this (other than reverting to the way '09 worked)... not sure what they will come up with. But, I played around a lot with this new way of editing... and I could give a full scenario of what is happening with your photos as you edit... but, basically, it seems that Apple has met Windows in this '11 upgrade in that if you want to access different steps of a series of edits, you need to make copies as you go.

  • FCPX 10.0.7 hangs when loading multiple projects - a non-destructive work around

    FCPX 10.0.7 hangs when loading multiple projects - a non-destructive work around....
    Hi Guys, I upgraded from FCPX 10.0.6 to 10.0.7 and discovered to my immense frustration that many of my 100+ FCPX projects would not load.
    Symptom:
    FCPX 10.0.7 appears to hang ("spinning beachball") with either high CPU% on a single core or with 1-2% CPU busy (Activity Monitor.app) when a FINAL CUT PROJECT is selected. No error information in /var/log... hmm.
    I notice that the "status circle" spins around endlessly.
    Activity Monitor.app shows "Final Cut Pro (Not Responding)",
    and I also notice in some cases that an OEM filter ("NEAT VIDEO NOISE REDUCTION") is in the process of being loaded. FCPX V10.0.7 must be FORCE QUIT to remove it from the system.
    History: Upgraded FCPX from V10.0.6 to V10.0.7.
    "FINAL CUT EVENTS" and "FINAL CUT PROJECTS" on a single file system on SAS 16TB disk array (768MB/sec read via AJA system test) and 60% utilised capacity… FCP EVENTS has 400 events in it .. some 5.2TB of Prores essence etc .. all works fine! (i.e. the storage system is first class and is NOT the issue)
    32GB of RAM on MAC PRO 2009 Nehalem 16 x Vcore with ATI 5780 card and ATTO HBA's (i.e. plenty of resources!)
    As usual with any "upgrade" to FCPX 10.0.? all the EVENT objects need to be upgraded. In my case this takes an hour or two.. so I do that when asleep.
    After experiencing the above, I restored the FCPX EVENTS Library of that file system from an LTO4 tape archive (BRU-PE) and still had the same issues as above.
    WORKAROUND:
    FORCEd QUIT FCPX V10.0.7
    RENAMEd "/volumes/some_file_system_volume/Final Cut Projects" to "/volumes/some_file_system_volume/Final Cut Projects_original"
    created (make) a new "/volumes/some_file_system_volume/Final Cut Projects" (use finder)
    For each project !!! in "/volumes/some_file_system_volume/Final Cut Projects_original", (do one project at a time!!!)
    MOVE "/volumes/some_file_system_volume/Final Cut Projects_original/one_fcpx_projext_nnnn_folder" to "/volumes/some_file_system_volume/Final Cut Projects"
    Make sure you move ONLY one project at a time. If you have a subfolder of projects, please do each project one at a time (serially!)
    Launch FCPX V10.0.7 and BE PATIENT!!! .. DONT click or fiddle with the UI.. it seems when you intervene it locks up as well…
    Let FCPX V10.0.7 settle….
    select the project you just added above  and RELINK any objects it needs. Thumbnails and proxies will be rendered again.. just be patient
    wait until ALL the rendering has stopped.
    QUIT FCPX V10.0.7
    (now if FCPX locks up, just force it out and start again as above).
    repeat for all projects in "/volumes/some_file_system_volume/Final Cut Projects_original" (go to step 4 and repeat until all projects are moved)
    When all is COMPLETED MAKE SURE YOU ARCHIVE an instance (or make a backup ) of "/volumes/some_file_system_volume/Final Cut Projects"
    If this procedure has worked, the folder "/volumes/some_file_system_volume/Final Cut Projects_original" will have zero (nil, none) projects in it.
    I have managed to restore ALL my  "/volumes/some_file_system_volume/Final Cut Projects" this xmas between drinking etc. I'm satisfied that its all ok.
    Other issues:
    use DISK WARRIOR or TT PRO 6 to make sure that the file system volume where your  "/volumes/some_file_system_volume/Final Cut Projects" are is physically ok. I noticed some entries in file system's  volume table that represented objects in  "/volumes/some_file_system_volume/Final Cut Projects" were at fault when I used these utilities… FWIW.
    SUMMARY: yes, this took ages to do; however, luckily I had everything in at least 3 instances in an archive, which has saved me many times in the old FCP and prior days... it was just a matter of time to put it back together.
    I put this outage down to maybe my own impatience when I first fired up FCPX 10.0.7 after the upgrade.
    I'm interested in whether this workaround is helpful to others and, in addition, whether others have a more satisfactory remedy.
    HTH
    warwick
    Hong Kong

    Hi Eb, yeah, I could not see any "memory leak" or unusual consumption of REAL memory whose reduced availability would cause excessive PAGEing and SWAPping, as seen in Activity Monitor.app.
    I watch this carefully, especially the use of REAL MEMORY by 3rd party apps. BTW, there are a few that cause ALLOCATED but NOT USED memory (blue in the Activity Monitor UI). A simple unix PURGE command can release that memory and help clean up the PAGE and VM swap files (allegedly!).
    Yes, you may be right about the element of "Luck" involved. I would add, though, that having MULTIPLE projects displaying in the STORYLINE window and loading always caused my FCPX 10.0.7 to jam up at startup, with the symptoms and observations I described.
    TIP: I might also add that for a super speedy launch of FCPX one may also employ setting each PROJECT's UI to show only AUDIO, thus negating the need to construct or render out a PREVIEW UI for each clip in the storyline.
    Your/Apple's suggestion of the movement (rename) of "Final Cut Pro Projects" to "Final Cut Pro Projects Hidden" is similar to what I proposed above to stop FCPX 10.0.7 accessing and building it up at startup. This workaround has been useful in the past as well.
    Also, one might get the stick out and remove (delete/rm) ~/Library/Saved Application State/com.apple.FinalCut.savedState in one's home ~/Library so that FCPX won't do such a neat job of reinstating its state from the last time you crashed it... This has also been helpful in diagnosing my issues.
    Lastly, I have noticed that:
    impatience clicking on the FCPX UI when it is in the unstable state causes it to lock up with NO visible CPU% busy... as if it's waiting on something, which usually ends with me terminating it via FORCE QUIT, and
    the projects where I have employed the NEAT VIDEO Noise Reduction OEM filter for FCPX seem to exacerbate the PROJECT loading issues when several PROJECTS are available at FCPX startup time.
    As of yesterday I have some 400+ Final Cut Pro Events and 130+ projects of varying complexities in a single file system of Final Cut Pro Projects, all working fine and as good as gold again!
    Oh, and one more thing: I had to RE-RENDER many of the projects again... strange, as the FCP PROJECT library was reinstated from a recent LTO archive as of FCPX V10.0.6. I would have expected that if the projects and events were 10.0.6/7 compatible, as proposed by Tom, this would not be necessary... hmm, strange that.
    I'll monitor this thread.
    Thanks for your comments lads!

  • How can I tell if a non-destructive crop has been applied when opening an image?

    I've wrapped my head around how to reclaim the stripped-out portion of an image that has been non-destructively cropped in CS6: click the image with the crop tool, or select Reveal All from the Image menu. Short of doing this every time I suspect that I may have cropped an image, is there anything in Photoshop's interface to tell me at a glance if the image has been non-destructively cropped? I guess I could check the document dimensions in the pop-up status display at the bottom of the window, but I'm looking for something more direct that doesn't make me search.
    2009 iMac 3.06 GHz Core 2 Duo; OS 10.8.1
    Jeff Frankel

    If you primarily edit files from a particular camera, then your idea to set the status box at the lower-left to show the image dimensions is a good one - just watch for dimensions that are not the norm. 
    You shouldn't have to set the readout to Document Dimensions more than once, though...  Open one image, set that field to read Document Dimensions, then close the document and quit Photoshop gracefully.  From now on it ought to read Dimensions when you open a new document.
    I wasn't aware 10.8.1 was out.
    -Noel

  • Paging and count of rows

    Hi,
    I have a procedure doing paging and returning the count of rows.
    As you can see, I used two SELECTs: one for paging and one for getting the total row count.
    Is there a way to get rid of "SELECT count(*) into PO_TOTAL FROM TABLE1;"?
    CREATE OR REPLACE PACKAGE BODY MYPACKAGE AS
    TYPE T_CURSOR IS REF CURSOR;
    PROCEDURE SP_PAGING (
    PI_STARTID IN NUMBER,
    PI_ENDID IN NUMBER,
    PO_TOTAL OUT NUMBER,
    CUR_OUT OUT T_CURSOR
    )
    IS
    BEGIN
    OPEN CUR_OUT FOR
    SELECT *
    FROM (SELECT row_.*, ROWNUM rownum_
    FROM (
    SELECT column1, column2, column3 FROM TABLE1
    ) row_
    WHERE ROWNUM <= PI_ENDID)
    WHERE rownum_ >= PI_STARTID;
    SELECT count(*) into PO_TOTAL FROM TABLE1;
    END SP_PAGING;
    END MYPACKAGE;

    Yes, I can reproduce that:
    SQL> create table emp1 as select * from emp
      2  /
    Table created.
    SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP1');
    PL/SQL procedure successfully completed.
    SQL> EXPLAIN PLAN FOR
      2  SELECT  ename,
      3          job,
      4          sal,
      5          cnt
      6    FROM  (
      7           SELECT  ename,
      8                   job,
      9                   sal,
    10                   count(*) over() cnt,
    11                   row_number() over(order by 1) rn
    12             FROM  emp1
    13          )
    14    WHERE rn BETWEEN 4 AND 9;
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 1444408506
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      |    14 |   728 |     4  (25)| 00:00:01 |
    |*  1 |  VIEW               |      |    14 |   728 |     4  (25)| 00:00:01 |
    |   2 |   WINDOW BUFFER     |      |    14 |   252 |     4  (25)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL| EMP1 |    14 |   252 |     3   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       1 - filter("RN">=4 AND "RN"<=9)
    15 rows selected.
    SQL> EXPLAIN PLAN FOR
      2  SELECT *
      3  FROM (SELECT row_.*, ROWNUM rownum_
      4  FROM (
      5  SELECT ename,job,sal, count(*) over() FROM emp1
      6  ) row_
      7  WHERE ROWNUM <= 9)
      8  WHERE rownum_ >= 4;
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 519025698
    | Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |      |     9 |   468 |     2   (0)| 00:00:01 |
    |*  1 |  VIEW                 |      |     9 |   468 |     2   (0)| 00:00:01 |
    |*  2 |   COUNT STOPKEY       |      |       |       |            |          |
    |   3 |    VIEW               |      |     9 |   351 |     2   (0)| 00:00:01 |
    |   4 |     WINDOW BUFFER     |      |     9 |   162 |     2   (0)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL| EMP1 |     9 |   162 |     2   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       1 - filter("ROWNUM_">=4)
       2 - filter(ROWNUM<=9)
    18 rows selected.
    SQL>
    However, you interpreted it incorrectly. Your query will still fetch all rows in TABLE1. The optimizer is smart enough to stop building the window buffer after 9 rows, which is why you see 9 in the explain plan, while my query builds a 14-row window buffer and then filters. So your query will be a bit faster.
    SY.
    Edited by: Solomon Yakobson on Jan 25, 2009 2:51 PM

  • Problem between SSMS and Report ! Filter query not showing data or showing wrong data

    Hi all,
    In short: I have a report with multiple parameters such as name of shop, postal code, etc. The parameters have no default value and act as "like". If someone enters "krant", he'll get all the shop names that contain "krant" in their name, etc.
    The same goes for postal code: if someone enters 2550, he'll get all data for 2550. The problem, though, is that if a user starts with postal code as the parameter and leaves the shop name empty, the shop name is not shown in my report! The other way around it works: when I enter a shop name, I'll get all shops + postal codes in my report.
    I know this is because the POS name cannot be shown in the report because it's left blank, but I want my MDX query to be able to give me the POS names even if I only enter a postal code.
    Can someone please please look at my query below? I need to add 4 more parameters this way later on too!
    SELECT
    { [Measures].[Sales amount] } ON COLUMNS,
    NON EMPTY
    Filter(
        [Point of sale].[POS name].AllMembers,
        InStr(
            [Point of sale].[POS name].CurrentMember.MEMBER_CAPTION,
            @PAR_POSName
        ) > 0
    )
    * Filter(
        [Point of sale].[POS postal code].AllMembers,
        InStr(
            [Point of sale].[POS postal code].CurrentMember.MEMBER_CAPTION,
            @PAR_POSTAL_CODE
        ) > 0
    )
    * [Point of sale].[Client id].[Client id]
    * [Point of sale].[POS id].[POS id]
    * [Point of sale].[POS street].[POS street]
    * [Point of sale].[POS town].[POS town]
    * [Point of sale].[POS housenumber].[POS housenumber]
    ON ROWS
    FROM [mycube]

    You have to use
    StrToMember (MDX) /
    StrToSet (MDX) /
    StrToTuple (MDX) to convert the Parameter; see
    Parameterized Reporting Services Reports with Analysis Services as a Data Source
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
