SUBSET OF RECORDS

given
"SELECT * FROM BLAH" returns 2,000,000 records
that I want to display in 1,000-record pages
a fresh query must be made for each page
is there a best practice for getting, say, records 10000 -> 11000
without having to fetch the complete result set and seek to 10000

Nei, yes, there are ways to extract a small subset of records, though some coding may be required:
With version 8.1+ the exp utility has the ability to add a WHERE clause condition to the export, but only one query is allowed and it must work for every table in the export set.
You can create a database link on the test system and select from production, across the link, only those rows you want.
You can query the tables, spool a delimited file of the desired output, and use sqlldr to insert the data into test.
HTH -- Mark D Powell --
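For the paging itself, a common pattern is to let the database return only the window of rows you want. Below is a minimal JDBC sketch against Oracle; the table name "blah", the sort key "id", and the page bounds are placeholders, and any deterministic, unique ORDER BY key will do:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PageFetcher {
    // Fetch rows firstRow+1 .. firstRow+pageSize without scrolling through
    // the whole result set on the client. Classic ROWNUM pagination; works
    // on Oracle 8i/9i and later.
    public static ResultSet fetchPage(Connection conn, int firstRow, int pageSize)
            throws SQLException {
        String sql =
            "SELECT * FROM ( " +
            "  SELECT t.*, ROWNUM rn FROM ( " +
            "    SELECT * FROM blah ORDER BY id " +   // placeholder table and sort key
            "  ) t WHERE ROWNUM <= ? " +
            ") WHERE rn > ?";
        PreparedStatement ps = conn.prepareStatement(sql);
        ps.setInt(1, firstRow + pageSize);   // upper bound, e.g. 11000
        ps.setInt(2, firstRow);              // lower bound, e.g. 10000
        return ps.executeQuery();
    }
}

The inner ROWNUM <= bound gives Oracle a stop key, so it does not have to hand back all 2,000,000 rows before the page is carved out.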

Similar Messages

  • Displaying a subset of records is limiting the information I can show

    Hello All,
    I'm developing my photo gallery's TOC page and when I began, I had lots of data I wanted to display and through the use of lookup tables and the proper logic in my SELECT query, I could show a vertical list of galleries that matched a specific category. With each was listed two sample thumbnails, title, description, model, photographer and make-up artist names. I even was able to set the page title and header to display the proper information.
    An additional SELECT query counted the number of galleries in the main table that were relevant to the present query string value.
    Another SELECT query counted the number of photos in each different gallery photoset and displayed that information in EACH gallery's title.
    Keep in mind that the description above is in reverse order to the flow of the code. Here is the code (the $spec variable holds the query string value):
    //get the total number of photos in a gallery photoset
    $conn = dbConnect('query');
    $getTotal = "SELECT COUNT(*)
                FROM photos, galleries
                WHERE galleries.g_spec = '$spec%'
                AND galleries.g_id = photos.g_id ";
    $total = $conn->query($getTotal);
    $row = $total->fetch_row();
    $totalPix = $row[0];
    // get the number of galleries
    $conn = dbConnect('query');
    $getGtotal = "SELECT COUNT(*)
                FROM galleries
                WHERE galleries.g_spec = '$spec%' ";
    $g_total = $conn->query($getGtotal);
    $row = $g_total->fetch_row();
    $galTotal = $row[0];
    //get gallery thumbnails, name, description, subject, category, model, photographer, mua
    $conn = dbConnect('query');
    $sql = "SELECT g_thumb1, g_thumb2, g_name, g_desc, subject_id, category_id, model_name, photographer_name, mua_name
            FROM galleries, gallery_spec, model_names, photographer_names, mua_names
            WHERE galleries.g_spec = '$spec%'
            AND gallery_spec.g_spec = galleries.g_spec
            AND model_names.model_id = galleries.g_model
            AND photographer_names.photographer_id = galleries.g_photographer
            AND mua_names.mua_id = galleries.g_mua
            ORDER BY g_id DESC ";
    $result = $conn->query($sql) or die(mysqli_error());
    $galSpec = $result->fetch_assoc();
    *** end of code.
    Anyhow, this worked wonderfully. However, as I add galleries, this list will get quite long. So, I wanted to add navigation which would break up the number of galleries into set quantities.
    When I introduced parameters to LIMIT the number of records displayed based on a variable called SHOWMAX, my displayed data got all botched. Admittedly, I'm adapting code from David Powers' book PHP Solutions and am learning as I go. I gave up on displaying all of the data shown above and went the simple route, figuring I'd add more features as I learned. This necessitated restructuring my database a bit.
    Here is the code I added that allows for navigation based on a limit of 3 entries per page:
    // set maximum number of records per page
    define('SHOWMAX', 3);
    // get the number of galleries
    $conn = dbConnect('query');
    $getGtotal = "SELECT COUNT(*)
                FROM galleries ";
    $g_total = $conn->query($getGtotal);
    $row = $g_total->fetch_row();
    $galTotal = $row[0];
    // set the current page
    $curPage = isset($_GET['curPage']) ? $_GET['curPage'] : 0;
    // retrieve subset of galleries
    $conn = dbConnect('query');
    // calculate the start row of the subset
    $startRow = $curPage * SHOWMAX;
    $sql = "SELECT *
            FROM galleries
            ORDER BY g_id DESC
            LIMIT $startRow,".SHOWMAX;
    $result = $conn->query($sql) or die(mysqli_error());
    $galSpec = $result->fetch_assoc();
    *** end of code.
    Along with some navigation code in the body:
    <div id="header4">
        <p>Displaying <?php echo $startRow+1;
              if ($startRow+1 < $galTotal) {
                echo ' to ';
                if ($startRow+SHOWMAX < $galTotal) {
                  echo $startRow+SHOWMAX;
                } else {
                  echo $galTotal;
                }
              }
              echo " of $galTotal";
              if ($curPage > 0) {
                echo '<a href="'.$_SERVER['PHP_SELF'].'?curPage='.($curPage-1).'"> Back </a>';
              }
              if ($startRow+SHOWMAX < $galTotal) {
                echo '<a href="'.$_SERVER['PHP_SELF'].'?curPage='.($curPage+1).'"> Next </a>';
              }
              ?>
    I can now display my gallery information in groups of 3 per page.
    THE PROBLEM IS that when I add additional queries, say to get the subject and category information, model, photographer and/or make-up artist information, it doesn't always come up, sometimes it messes up the display, and in all cases this additional information disappears when I navigate from subset to subset. This is why I have been asking about carrying values from one page to the next.
    I am absolutely stonewalled and am going to have to put the galleries into service sans navigation until I can figure it out. I'm willing to even hire someone for advice. I just can not make this work. Yes, I realize it's complex for someone of my skill level, but I have no choice, I have to make this work.
    Alternate forms of navigation/showing subsets of records would be greatly appreciated. ANY help, suggestions or guidance will be greatly appreciated.
    Thank you for reading through all this.
    Most sincerely,
    wordman

    Again, you are misunderstanding. I'm not suggesting hardcoding any values. I am only asking whether you are/were trying to pass the actual subject and category descriptions, or the IDs of category and subject so you can requery the database on each page.
    > I don't think the page re-queries on navigating, I think it just loads  info from the array.
    I don't think so. Just look at the SQL. It's using the limit and offset values which means to me that it's only returning the rows for the current page. What array are you talking about?
    >So, I added a separate query to pick this info from the DB, and it bombs  the page.
    I really believe you are going about this all wrong. Right now you are searching for solutions to specific problems without having a solid foundation in the technologies involved. Trying to build as you learn is a recipe for disaster and frustration. Stop what you are doing for a while. Work on some php and SQL tutorials that are unrelated to this project. Work through them until you understand what each line of code is doing.
    Right now you are headed down a rat hole. You may get most of it to work, but you will be plagued by bugs and scripts that are difficult to maintain.

  • Hiding a subset of records

    We import portfolio data from another system into On Demand. All portfolio data is read-only. Role A users are allowed to view all portfolio records. Role B can only view some of the portfolio records. If Role B is looking at an Account, it is very likely that the Account is related to both visible and non-visible portfolio records.
    Here is my thinking on how to meet this need (I'd like some feedback):
    Both roles need to have portfolio access (step 2 of role definition). Without it, nobody sees portfolio records.
    Set the default access profile for both roles to have Read-Only access for portfolio (no updates allowed).
    Create a Book that contains the portfolio records that are NOT visible to Role B. It is OK to assume there is an attribute in the portfolio record that workflow can use to assign the Book for the record.
    Assign each Role B user to the Book with Read-Only access.
    Question: Will this approach meet the stated need? I'm assuming that if a record is in a book, the access will be limited to those users assigned to the book. In other words, the book limits visibility to users of the book even if all users have read-only access to portfolio. The book becomes the "filter".
    Additionally, will Role B users viewing the Portfolio related list on Account see any of the "invisible" records? I'd prefer that they not see them, but can live with it if they do.
    Regards,
    Jeff

    After a few hours of experimentation, I'll answer my own post.....
    Two changes to my approach were necessary. The Portfolio "Can See All Records?" checkbox for Role B is NOT checked. Then use a book for those VISIBLE records. Assign each Role B user to the book with read-only access.
    Unfortunately, the Account Portfolio related list for a Role B user shows ALL related Portfolio records, not just those within the book. Of course, the user can only open those records that are within the book. Sure wish there was a way to hide the records the Role B user isn't supposed to see.
    If you have an idea, let me know.
    Regards,
    Jeff

  • How do I add a subset of records to a set of records?

    I have a result set of consultants and branch offices and I need to add a subset of employees to each branch office.
    I added an Employee[ ] subset to my consultant getter/setters. A PL/SQL package returns a cursor result set
    of consultants and branch offices and another cursor result set of employees. Each employee is assigned an
    office ID and each consultant and branch office is assigned an office ID.
    I am unsure in Java how to traverse the employee result set to get each subset of employees with the correct office.
    Here is the code that calls the PL/SQL and loads my vector array with consultants and branch offices:
    public Consultant[] getConsultants(String pvConsultant_Firm,
                                        String pvAddr_City_Main,
                                        String pvAddr_State_Main,
                                        String pvAddr_Zip_Main,
                                        String pvAddr_City_Branch,
                                        String pvAddr_State_Branch,
                                        String pvAddr_Zip_Branch,
                                        String pvResidency,
                                        String pvFirst_Name,
                                        String pvLast_Name,
                                        String pvOrder_By_Office,
                                        String pvOrder_By_Employee,
                                        String display_Branch,
                                        String display_Employee) {
      Vector retval = new Vector();
      DBConnection conn = new DBConnection();
      CallableStatement cstmt = conn.prepareCall("begin " + PACKAGE + "Get_Consultant_Cursors(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?);end;");
      ResultSet rsConsultant;
      ResultSet rsEmployee;
      Consultant curConsult;
      Employee curEmployee;
      try {
         if (cstmt != null) {
            String Consult_Type = null;
            if ("true".equalsIgnoreCase(display_Branch) ||
                 "true".equalsIgnoreCase(display_Employee)) {
               Consult_Type = null;
            } else {
               Consult_Type = "MAIN";
            }
            cstmt.setString("pvConsultant_Firm", pvConsultant_Firm);
            cstmt.setString("pvOffice_Type", Consult_Type);
            cstmt.setString("pvAddr_City_Main", pvAddr_City_Main);
            cstmt.setString("pvAddr_State_Main", pvAddr_State_Main);
            cstmt.setString("pvAddr_Zip_Main", pvAddr_Zip_Main);
            cstmt.setString("pvAddr_City_Branch", pvAddr_City_Branch);
            cstmt.setString("pvAddr_State_Branch", pvAddr_State_Branch);
            cstmt.setString("pvAddr_Zip_Branch", pvAddr_Zip_Branch);
            cstmt.setString("pvResidency", pvResidency);
            cstmt.setString("pvFirst_Name", pvFirst_Name);
            cstmt.setString("pvLast_Name", pvLast_Name);
            cstmt.setString("pvOrder_By_Employee", pvOrder_By_Employee);
            cstmt.setString("pvOrder_By_Office", pvOrder_By_Office);
            cstmt.registerOutParameter("pcurConsultant_Office", OracleTypes.CURSOR);
            cstmt.registerOutParameter("pcurConsultant_Employee", OracleTypes.CURSOR);
            cstmt.execute();
            rsConsultant = (ResultSet) cstmt.getObject("pcurConsultant_Office");
            rsEmployee = (ResultSet) cstmt.getObject("pcurConsultant_Employee");
            while (rsConsultant.next()) {
                curConsult = getConsultant(rsConsultant);
                retval.add(curConsult);
                if ("true".equalsIgnoreCase(display_Employee)) {
                    // HOW DO I HANDLE THIS?
                }
            }
            rsConsultant.close();
            conn.closeCstmt(cstmt);
         }
      } catch (SQLException e) {
         conn.logToFile(this, "getConsultants()", e);
      }
      conn.close();
      return (Consultant[]) retval.toArray(new Consultant[retval.size()]);
    }

    It will basically look something like this:
    Consultant A           Main Office
                                                 Employee 1
                                                 Employee 2
                                                 Employee 3
                           Branch A       
                                                 Employee 4
                                                 Employee 5
                           Branch B
                                                 Employee 6
                                                 Employee 7
    Consultant B           Main Office
                                                 Employee 8
                                                 Employee 9
                           Branch A       
                                                 Employee 10
                                                 Employee 11
    The consultant and branch offices are in one result set and the employees are in another. I need to combine them
    so I can return them and display them as above in my JSP.
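    One way to handle the "HOW DO I HANDLE THIS?" spot above is to read the employee cursor once into a map keyed by office ID and then attach each group to its consultant or branch office. A minimal sketch follows; the Employee accessors (setOfficeId/getOfficeId), the office ID column name, and a setEmployees method on Consultant are assumptions about your classes, not part of the original code:
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    public class EmployeeGrouper {
        // Read the employee cursor once and bucket the rows by office ID.
        public static Map<String, List<Employee>> groupByOffice(ResultSet rsEmployee)
                throws SQLException {
            Map<String, List<Employee>> byOffice = new HashMap<String, List<Employee>>();
            while (rsEmployee.next()) {
                Employee emp = new Employee();
                emp.setOfficeId(rsEmployee.getString("office_id"));  // hypothetical column name
                // ... populate the other Employee fields here ...
                List<Employee> group = byOffice.get(emp.getOfficeId());
                if (group == null) {
                    group = new ArrayList<Employee>();
                    byOffice.put(emp.getOfficeId(), group);
                }
                group.add(emp);
            }
            return byOffice;
        }
    }
    Inside the existing while (rsConsultant.next()) loop, a lookup such as byOffice.get(curConsult.getOfficeId()) then returns exactly the subset of employees to attach (for example via curConsult.setEmployees(...)) before the consultant is added to retval.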

  • Can the GetEvents Web Service return a subset of events for a given record

    Page 173 of Siebel Web Services On Demand Guide Version 3.1 (CRM On Demand Release15) Rev. A, states the following:
    "You can return events for all record types, or a subset of record types, depending on how you prepare the WSDL files associated with the Integration Event service,"
    The IntegrationEventWS_GetEvents_Input type in the Integration Event WSDL that is available only contains an "Event Count" parameter for the number of events to retrieve, and appears to have no way to specify a record type in order to retrieve a subset of the events as stated above.
    Is there any way to retrieve a subset of the Integration Events by record type?
    Thanks
    John

    Customer care have replied to me: the line quoted above is a documentation error and there is no way to query the GetEvents Web Service to return a subset of events for a given record.

  • Query to fetch a set of records

    My query is expected to fetch a huge number of records, say 20,000, depending on the parameters passed through the front end. The front-end developer wants these records displayed on different pages, with each page showing 100 records. But the next 100 records should be fetched only when the user selects the next-page option, i.e. the query should execute 20000/100 times. If I use ORDER BY and ROWNUM (I think it can be used, not tried), it will again require sorting and the operation will be slow, which is to be avoided. What should be done in this case? The front end is Java and the database is Oracle 9i.
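    (For reference, a minimal sketch of the per-page query using the ROW_NUMBER() analytic function available in Oracle 9i; the table name, sort key and page size below are placeholders, not part of the original post.)
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    public class PagedQuery {
        // Return rows (page*100 + 1) .. (page*100 + 100) for a 0-based page number.
        public static ResultSet fetchPage(Connection conn, int page) throws SQLException {
            String sql =
                "SELECT * FROM ( " +
                "  SELECT o.*, ROW_NUMBER() OVER (ORDER BY order_id) rn " +
                "  FROM orders o " +      // placeholder table and sort key
                ") WHERE rn BETWEEN ? AND ?";
            PreparedStatement ps = conn.prepareStatement(sql);
            ps.setInt(1, page * 100 + 1);
            ps.setInt(2, page * 100 + 100);
            return ps.executeQuery();
        }
    }
    An index on the sort key keeps the per-page cost reasonable; as the reply below argues, re-querying per page is generally preferable to caching all 20,000 rows in the application tier.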

    > Will there be any difference in performance in these 2 approaches?
    > 1) Fetch all records with a single query, store in an array (in Java) and keep displaying 100 records per page.
    > 2) Execute query multiple times, query will be using rownum. If the query displays a subset of records, it has to fetch all the records every time.
    One of these questions that results in a YES and NO answer.
    Simplistically, yes - significant performance improvement as subsequent page displays do not need to access the Database. The rows are locally cached.
    Unfortunately, some J2EE developers only think that far and code it like that on development. And performance is awesome.
    They fail to realise that problems with this approach are that
    a) it does not scale
    b) it lacks data consistency
    So it gets implemented. The 1st time 20 users hit that J2EE app server, it trashes memory attempting to create huge caches for each user - with each cache containing all the rows that the user may (or may not) be interested in.
    Performance degrades drastically as the swap space is totally trashed. The machine crashes.
    Clever J2EE architects respond with "oh, we must scale" - and want to purchase more hardware, configure these as J2EE servers and cluster the application tier. What a silly and expensive way to achieve scalability!
    The business user on the other hand sees data in the cache - it can be old. It may not reflect the current consistent data in the database. The business user does not know that. He tells the customer "Yes, we still have 100 units at the special price.. not a problem, I will reserve all 100 for you, my special customer". And then gets mud in his face as he proceeds with an attempt to sell 100 non-existent units that have long since been sold.
    Hmmm.. so the clever J2EE architects come up with the idea of using persistence in the app tier by using something like EJB. Let's cache all the data! And let us share that data between users! Wow!! Great, it now solves that problem!
    But it does not. Instead, something like EJB is a VERY POOR attempt to emulate what the Database does, and does EXTREMELY WELL.
    J2EE suffers from a Let-Us-Reinvent-The-Database-Wheel-In-The-Application-Tier approach. It is fatally flawed. The Database is much more than just a bit bucket. The Database is much more scalable in design and use than a (Java) app tier.
    Despite what the Java prophets proclaim, the J2EE architecture is not The Saviour - and no substitution for common sense and logic.

  • Fastest way to select subset of a resultset?

    I have a query which I use to load a result set in Java, 1,000 records at a time. It looks like this:
    select DATA_CLOB as msg from
    (select DATA_CLOB, rownum as rn from msg_data where batch_id = ? and category = ? order by msg_data_id)
    where rn between ? and ?
    When the number of records for a given batch_id, category is small, say a few tens of thousands, it's never been a problem
    But just recently it had to deal with one batch where there were 1.5 M records and this query is killing us; it's been 3 DAYS now and it's not done yet!
    There are proper indexes on the columns selected from, and confirmed with Explain Plan that the plan is only doing indexed searches.
    Was the way we do the BETWEEN on two rownum values poor design?
    Is there a more efficient way?
    Other way I was thinking was maybe just do where rownum < 1000 each time instead of maintaining an index range which is incremented by a thousand each time.
    In either case I have both that select and an update which updates a timestamp on the rows that were processed, so I could use that in the where rownum < 1000 query to filter out rows already processed.
    Can you think of any other ways to do this quicker?
    Again, the sole purpose is to basically iterate through a subset of records from table, 1,000 at a time.

    trant wrote:
    I have a query which I use to load a result set in Java 1,000 records each time., it looks like this:
    select DATA_CLOB as msg from
    (select DATA_CLOB, rownum as rn from msg_data where batch_id = ? and category = ? order by msg_data_id)
    where rn between ? and ?
    When the number of records for a given batch_id, category is small, say a few tens of thousands, it's never been a problem
    But just recently it had to deal with one batch where there were 1.5 M records and this query is killing us; it's been 3 DAYS now and it's not done yet!
    Not surprising.
    It's having to query all the data (including your CLOB data by the looks of it) and then order all that data before it picks out just the 1,000 rows you want.
    What is the business process you are trying to achieve by doing this?
    Seems odd that you want to query back 1,000 clobs of data at a time over the network to your java application.
    Other way I was thinking was maybe just do where rownum < 1000 each time instead of maintaining an index range which is incremented by a thousand each time.
    Still, if it's got to order the data first, it's going to take time.
    In either case I have both that select and an update which updates a timestamp on the rows that were processed, so I could use that in the where rownum < 1000 query to filter out rows already processed.
    Can you think of any other ways to do this quicker?
    Again, the sole purpose is to basically iterate through a subset of records from table, 1,000 at a time.
    Why? For what purpose?
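    One alternative worth sketching (assuming msg_data_id is an ascending unique key, as the ORDER BY in the posted query suggests) is keyset pagination: remember the last msg_data_id processed and ask only for the next 1,000 rows after it, so the database never has to number or sort the whole 1.5M-row set on every call.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    public class MsgDataPager {
        // Fetch the next chunk of up to 1,000 rows after lastSeenId.
        // The caller remembers the largest msg_data_id it has processed and
        // passes it back in on the next call (start with 0 or -1).
        public static ResultSet nextChunk(Connection conn, long batchId,
                                          String category, long lastSeenId)
                throws SQLException {
            String sql =
                "SELECT msg_data_id, data_clob FROM ( " +
                "  SELECT msg_data_id, data_clob " +
                "  FROM msg_data " +
                "  WHERE batch_id = ? AND category = ? AND msg_data_id > ? " +
                "  ORDER BY msg_data_id " +
                ") WHERE ROWNUM <= 1000";
            PreparedStatement ps = conn.prepareStatement(sql);
            ps.setLong(1, batchId);
            ps.setString(2, category);
            ps.setLong(3, lastSeenId);
            return ps.executeQuery();
        }
    }
    With an index on (batch_id, category, msg_data_id) this can typically be answered by an index range scan with a stop key rather than a full sort, and the existing timestamp update on processed rows can stay as it is.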

  • Replicating MS Access single form record selectors functionality

    I am very new to APEX.
    I would like to rebuild and improve on all of our department's MS Access user forms using APEX.
    At the moment I am trying to replicate the standard MS Access "single form" view of a record in a database table, with the record selector navigation buttons (prev, next, new record).
    I have scoured the internet and this forum but have not successfully found the solution.
    I need to be able to select a subset of records from a view/table and present them on a page one at a time, so the user can scroll through them (prev, next) and make edits. Because of the amount of info in each record, the whole screen will be taken up by each record (hence the single form view).
    I have created a form pagination process that is supposed to get the next/previous primary key value, however I can't seem to get the buttons displayed on the form that should be firing off these processes.
    What combination of features should I be using to replicate this functionality? I am using the "Form on a table/view" style page at the moment.
    Cheers,
    Richard.

    Hi
    Even if you created a form pagination process, you need to "initialise" the PK field(s) -- by default, they are null and the pagination buttons will not show (this is very different from how Access works).
    You can do it using a conditional computation (e.g. selecting the min PK value when the PK field is null).
    I hope this helps.
    Luis

  • "ghost records" in partitioned tables

    Hi,
    We observe a very strange behavior of some partitioned table (range method) for a small subset of records. For example:
    select b.obj_id0 from event_session_batch_ctlr_t partition(partition_migrate) b, event_session_batch_ctlr_t c
    where b.obj_id0=c.obj_id0(+)
    and c.obj_id0 is null
    This query returns 20 records where it shouldn't return anything! obj_id0 is the partitioning key and the primary key of the table.
    If you query these rows directly from the partitioned table, even with a full scan, you don't get anything back. You will get the data if you query from that particular partition (partition_migrate).
    We found that these records can sometimes be returned from a query with a join on this partitioned table, without specifying any partition name, depending on the execution plan.
    Have you any explanation for this strange behavior and suggestions for how to solve this problem?
    Thank you in advance,
    Raphaël

    Hi,
    Retrieve those records from backup, so you'd better contact the Basis people.
    RTVDLTRCD FROMFILE(xxx) TOFILELIB(yyy) - this command is used to retrieve the deleted records.
    This would extract the deleted records in the FROMFILE and write them to the same-named file in the library specified for TOFILELIB.
    When a record is deleted in a database file, the data values still exist, but the system places a special hex value in front of the record specifying that it is deleted. The Operating System prevents any access to a deleted record through its normal interfaces.

  • How do I get a subset of a result set

    I am getting a result set from a remote data source. After the resultset is returned how can I do a select on the result to return a subset of records?
    My code:
    package com.drawingpdmw1;
    import java.sql.*;
    import com.ibm.as400.access.*;
    /**
     * @author sde
     * TODO To change the template for this generated type comment go to
     * Window - Preferences - Java - Code Style - Code Templates
     */
    public class getLastDashNumberConnection {
         public String errorString;
         public String literalerrorString;
         public String maxRecSeq;
         public getLastDashNumberConnection(String arg0) {
              AS400JDBCDataSource datasource = new AS400JDBCDataSource("gvas400");
              datasource.setTranslateBinary(true);
              datasource.setUser("drawchg");
              datasource.setPassword("webabc1");
              try {
                   Connection connection = datasource.getConnection();
                   Statement stmt = connection.createStatement();
                   String sql = "select distinct dmdrawno from webprddt6.drawmext3 Where dmdrawno LIKE '%" + arg0.substring(0,6) + "%'";
                   ResultSet rs = stmt.executeQuery(sql);
                   while (rs.next()) {
                        System.out.println(rs.getString("dmdrawno").trim());
                   }
                   errorString = "";
                   literalerrorString = "NoError";
              } catch (SQLException sqle) {
                   System.out.println("Error");
                   System.out.println(sqle);
                   errorString = "Warning!!! A New Dash Number could not be created!";
                   literalerrorString = "Error";
              }
         }
    }

    First, thanks for the info on closing my objects.
    Now as far as the data, here is my situation. The purpose of this class is to return the next dash number that can be used for a series of part numbers. In most cases the series is constructed with only one type of dash number. Example: -401, -403, -405; dashes that start with a 4 are assemblies. This makes it easy to figure out the next dash (-407).
    However, some of our part number series have dashes like -001, -003, -005, which are detail-level part numbers, along with the -401 etc.
    Now a user needs to create a new dash number. First, is this a detail dash or an assy dash? My idea is to get all the possibilities for a series, check for dash types, then return the next dash for each type.
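    A small sketch of that idea follows; the part-number format (a dash suffix such as -401 or -003, stepping by 2 within a series, with the first digit of the dash marking detail vs. assembly) is inferred from this post, so adjust the parsing to the real scheme:
    import java.util.HashMap;
    import java.util.Map;
    public class NextDashNumber {
        // Track the highest dash value seen per "type" (first digit of the dash:
        // '0' = detail, '4' = assembly) and return the next dash for a type.
        private final Map<Character, Integer> maxPerType = new HashMap<Character, Integer>();
        public void add(String partNumber) {
            int i = partNumber.lastIndexOf('-');
            if (i < 0 || i + 1 >= partNumber.length()) {
                return;                                  // no dash suffix, ignore
            }
            String dash = partNumber.substring(i + 1);   // e.g. "401"
            if (!dash.matches("\\d+")) {
                return;                                  // not a numeric dash, ignore
            }
            char type = dash.charAt(0);
            int value = Integer.parseInt(dash);
            Integer current = maxPerType.get(type);
            if (current == null || value > current) {
                maxPerType.put(type, value);
            }
        }
        public String nextDash(char type) {
            Integer current = maxPerType.get(type);
            int next = (current == null)
                     ? (type - '0') * 100 + 1            // first dash of that series, e.g. -001 or -401
                     : current + 2;                      // series steps by 2 per the examples above
            return String.format("-%03d", next);
        }
    }
    Feeding each rs.getString("dmdrawno").trim() value into add() inside the existing while loop, and then calling nextDash('0') or nextDash('4'), would give the next detail or assembly dash for the series.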

  • How do I count specific, smaller groups of information in one large table?

    Hello all,
    I have a feeling the answer to this is right under my nose, but somehow, it is evading me.
    I would like to be able to count how many photos are in any specific gallery. Why? Well, on my TOC page, I thought it would be cool to show the user how many photos were in any given gallery displayed on the screen as part of all the gallery data I'm presenting. It's not necessary, but I believe it adds a nice touch. My thought was to have one massive table containing all the photo information and another massive table containing the gallery information, and currently I do. I can pull various gallery information based on user selections, but accurately counting the correct number of images per gallery is evading me.
    In my DB, I have the table 'galleries', which has several columns, but the two most relevant are g_id and g_spec. g_id is the primary key and is an AI (auto-increment) column that also represents the gallery 'serial' number. g_spec is a value that will have one of 11 different values in it (not relevant for this topic).
    Additionally, there is the table 'photos', and in this table are three columns: p_id, g_id and p_fname. p_id is the primary key, g_id is the foreign key (primary key of the 'galleries' table) and p_fname contains the filename of each photo in my ever-expanding gallery.
    Here's the abbreviated contents of the galleries table showing only the first 2 columns:
    (`g_id`, `g_spec`, etc...)
    (1, 11, etc...),
    (2, 11, etc...),
    (3, 11, etc...),
    (4, 11, etc...),
    (5, 12, etc...),
    (6, 13, etc...)
    Here's the contents of my photos table so far, populated with test images:
    (`p_id`, `g_id`, `p_fname`)
    (1, 1, '1_DSC1155.jpg'),
    (2, 1, '1_DSC1199.jpg'),
    (3, 1, '1_DSC1243.jpg'),
    (4, 1, '1_DSC1332.jpg'),
    (5, 1, '1_DSC1381.jpg'),
    (6, 1, '1_DSC1421.jpg'),
    (7, 1, '1_DSC2097.jpg'),
    (8, 1, '1_DSC2158a.jpg'),
    (9, 1, '1_DSC2204a.jpg'),
    (10, 1, '1_DSC2416.jpg'),
    (11, 1, '1_DSC2639.jpg'),
    (12, 1, '1_DSC3768.jpg'),
    (13, 1, '1_DSC3809.jpg'),
    (14, 1, '1_DSC4226.jpg'),
    (15, 1, '1_DSC4257.jpg'),
    (16, 1, '1_DSC4525.jpg'),
    (17, 1, '1_DSC4549.jpg'),
    (18, 2, '2_DSC1155.jpg'),
    (19, 2, '2_DSC1199.jpg'),
    (20, 2, '2_DSC1243.jpg'),
    (21, 2, '2_DSC1332.jpg'),
    (22, 2, '2_DSC1381.jpg'),
    (23, 2, '2_DSC1421.jpg'),
    (24, 2, '2_DSC2097.jpg'),
    (25, 2, '2_DSC2158a.jpg'),
    (26, 2, '2_DSC2204a.jpg'),
    (27, 2, '2_DSC2416.jpg'),
    (28, 2, '2_DSC2639.jpg'),
    (29, 2, '2_DSC3768.jpg'),
    (30, 2, '2_DSC3809.jpg'),
    (31, 2, '2_DSC4226.jpg'),
    (32, 2, '2_DSC4257.jpg'),
    (33, 2, '2_DSC4525.jpg'),
    (34, 2, '2_DSC4549.jpg'),
    (35, 3, '3_DSC1155.jpg'),
    (36, 3, '3_DSC1199.jpg'),
    (37, 3, '3_DSC1243.jpg'),
    (38, 3, '3_DSC1332.jpg'),
    (39, 3, '3_DSC1381.jpg'),
    (40, 3, '3_DSC1421.jpg'),
    (41, 3, '3_DSC2097.jpg'),
    (42, 3, '3_DSC2158a.jpg'),
    (43, 3, '3_DSC2204a.jpg'),
    (44, 3, '3_DSC2416.jpg'),
    (45, 3, '3_DSC2639.jpg'),
    (46, 3, '3_DSC3768.jpg'),
    (47, 3, '3_DSC3809.jpg'),
    (48, 3, '3_DSC4226.jpg'),
    (49, 3, '3_DSC4257.jpg'),
    (50, 3, '3_DSC4525.jpg'),
    (51, 3, '3_DSC4549.jpg');
    For now, each gallery has 17 images which was just some random number I chose.
    I need to be able to write a query that says: tell me how many photos are in a specific photoset (in the photos table) based on the number in galleries.g_id and photos.g_id being equal.
    As you see in the photos table, the p_id column is an AI column (call it photo serial numbers), and the g_id column assigns each specific photo to a specific gallery number that is equal to some gallery ID in the galleries.g_id column. SPECIFICALLY, for example, I would want the query to count the number of rows in the photos table whose g_id = 2 when referenced to g_id = 2 in the galleries table.
    I have been messing with different DISTINCT and COUNT methods, but all seem to be limited to working with just one table, and here I need to reference two tables to achieve my result.
    Would this be better if each gallery had its own table?
    It should be so bloody simple, but it's just not clear.
    Please let me know if I have left out any key information, and thank you all in advance for your kind and generous help.
    Sincerely,
    wordman

    bregent,
    I got it!
    Here's the deal: the query that picks the subset of records:
    $conn = dbConnect('query');
    $sql = "SELECT *
            FROM galleries
            WHERE g_spec = '$spec%'
            ORDER BY g_id DESC
            LIMIT $startRow,".SHOWMAX;
    $result = $conn->query($sql) or die(mysqli_error());
    $galSpec = $result->fetch_assoc();
    picks 3 at a time, and with each record is an individual gallery number (g_id). So, I went down into my code where a do...while loop runs through the data, displaying the info for each subset of records and I added another query:
    $conn = dbConnect('query');
    $getTotal = "SELECT COUNT(*)
                FROM photos
                WHERE g_id = {$galSpec['g_id']}
                GROUP BY g_id";
    $total = $conn->query($getTotal);
    $row = $total->fetch_row();
    $totalPix = $row[0];
    which uses the value in $galSpec['g_id']. I didn't know the proper syntax for including it, but when I tried the curly braces, it worked. I altered the number of photos in each gallery in the photos table so that each total is different, and the results display perfectly.
    And as you can see, I used some of the code you suggested in the second query and all is well.
    Again, thank you so much for being patient and lending me your advice and assistance!
    Sincerely,
    wordman

  • Update a table in the database from a report

    The user needs to print a report multiple times during the day but needs to know whether or not the data was already printed so that it is not printed again... therefore I have a y/n/all field on the report to allow the user to choose which subsets of data to print...
    The problem is, how do I actually update the database table and mark the subset of records currently printed as printed from the report at runtime... how do I code this?
    Please help......

    You can use the following to insert or update the database from Reports:
    CMD_LINE := 'insert into Table values
    ('''||:toDate||''','||a||')';
    SRW.DO_SQL(CMD_LINE);
    CMD_LINE := 'update Table set net = '||a||'
    where ddate ='''||:todate||'''';
    SRW.DO_SQL(CMD_LINE);
    commit;

  • How to get total number of result count for particular key on cluster

    Hi-
    My application's requirement is that the client side needs only a limited number of records for a 'Search Key' out of the total records found in the cluster. I also need the 'total number of results' count for that key present on the cluster.
    To get the subset of records I'm using an IndexAwareFilter and returning only a limited set from each individual node. Though I get the total number of records present on each individual node, it is not possible to return this count to the client from the IndexAwareFilter (the filter returns only a Binary set).
    Is there any way I can get this number (total result size) on the client side without returning the whole chunk of data?
    Thanks in advance.
    Prashant

    user11100190 wrote:
    > Hi,
    > Thanks for suggesting a solution, it works well.
    > But apart from the count (cardinality), the client also expects the actual results. In this case, it seems that the filter will be executed twice (once for counting, then once again for generating the actual resultset).
    > Actually, we need to perform the paging. In order to achieve paging in an efficient manner we need the filter to return only PAGESIZE records and also return the total 'count' that meets the criteria.
    If you want to do paging, you can use the LimitFilter class.
    If you want to have paging AND the total number of results, then at the moment you have to use two passes if you want to use out-of-the-box features, because LimitFilter does not return the total number of results (which, by the way, may change between two page retrievals).
    > What we currently do is, the filter puts the total count in a static variable but returns only the first N records. The aggregator then clubs this info into a single list and returns it to the client. (The List returned by the aggregator contains a special entry representing the count.)
    This is not really a good idea, because if you have more than one user doing this operation then you will have problems storing more than one value in a single static variable, if you used a cache service with a thread pool (thread count set to larger than one).
    > We assume that the aggregator will execute immediately after the filter on the same node; this way the aggregator will always read the count set by the filter.
    You can't assume this if you have multiple client threads doing the same kind of filtering operation and you have a thread pool configured for the cache service.
    > Please tell us if our approach will always work, and whether it will be efficient as compared to using the Count class, which requires executing the filter twice.
    No it won't, if you used a thread pool. Also, it might happen that Coherence will execute the filtering and the aggregation from the same client thread multiple times on the same node if some partitions were newly moved to the node which already executed the filtering+aggregation once. I don't know of anything which would even prevent this being executed on a separate thread concurrently.
    The following solution may be working, but I can't fully recommend it as it may leak memory depending on how exactly the filtering and aggregation is implemented (if it is possible that a filtering pass is done but the corresponding aggregation is not executed on the node because of some partitions moved away).
    At sending the cache.aggregate(Filter, EntryAggregator) call you should specify a unique key for each such filtering operation to both the filter and the aggregator.
    On the storage node you should have a static HashMap.
    The filter should do the following two steps while being synchronized on the HashMap.
    1. Ensure that a ConcurrentLinkedQueue object exists in a HashMap keyed by that unique key, and
    2. Enqueue the total number count you want to pass to the aggregator into that queue.
    The parallel aggregator should do the following two steps while being synchronized on the HashMap.
    1. Dequeue a single element from the queue, and return it as a partial total count.
    2. If the queue is now empty, then remove it from the HashMap.
    The parallel aggregator should return the popped number as a partial total count as part of the partial result.
    The client side of the parallel aware aggregator should sum the total counts in the partial result.
    Since the enqueueing and dequeueing may be interleaved from multiple threads, it may be possible that the partial total count returned in a result does not correspond to the data in the partial result, so you should not base anything on that assumption.
    Once again, that approach may leak memory based on how Coherence is internally implemented, so I can't recommend this approach but it may work.
    Another thought is that since returning entire cached values from an aggregation is more expensive than filtering (you have to deserialize and reserialize objects), you may still be better off by running a separate count and filter pass from the client, since for that you may not need to deserialize entries at all, so the cost on the server may be lower.
    Best regards,
    Robert
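    For reference, a minimal sketch of the two-pass, out-of-the-box approach mentioned above (LimitFilter for the page plus the Count aggregator for the total); the cache name, the extractor method on the cached objects, and the page size are assumptions:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.aggregator.Count;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.LimitFilter;
    import java.util.Set;
    public class PagedSearch {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("search-cache");        // hypothetical cache name
            EqualsFilter match = new EqualsFilter("getSearchKey", "some-key");
            // Pass 1: total number of entries matching the key.
            Integer total = (Integer) cache.aggregate(match, new Count());
            // Pass 2: only the requested page of matching entries.
            LimitFilter page = new LimitFilter(match, 100);                  // PAGESIZE = 100
            page.setPage(0);                                                 // first page
            Set pageEntries = cache.entrySet(page);
            System.out.println("total=" + total + ", page entries=" + pageEntries.size());
        }
    }
    As noted above, the total can change between the two passes, so it is best treated as an approximate figure when rendering the pager.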

  • Custom Transaction for maintenance with conditions

    Hi all,
    My requirement is to create in SE93 a generic transaction with a call to SM30, with "skip first screen" in edit mode.
    It's easy!
    Now I have to enter a condition on a field of the maintained table, to grant view/add/modify of only a subset of records...
    PLEASE HELP ME!!!

    Hi
    You can create a report instead of a transaction calling SM30.
    The report has to call the fm VIEW_MAINTENANCE_CALL: here you can define your condition:
    DATA: T_COND TYPE STANDARD TABLE OF VIMSELLIST WITH HEADER LINE.
    T_COND-VIEWFIELD = 'VKORG'.
    T_COND-OPERATOR  = 'EQ'.
    T_COND-VALUE     = '1000'.
    T_COND-AND_OR    = 'OR'.
    APPEND T_COND.
    T_COND-VIEWFIELD = 'VKORG'.
    T_COND-OPERATOR  = 'EQ'.
    T_COND-VALUE     = '1001'.
    APPEND T_COND.
    CALL FUNCTION 'VIEW_MAINTENANCE_CALL'
      EXPORTING
        ACTION                       = 'U'
        VIEW_NAME                    = 'ZBW_EM_USA_V'
      TABLES
        DBA_SELLIST                  = T_COND
      EXCEPTIONS
        CLIENT_REFERENCE             = 1
        FOREIGN_LOCK                 = 2
        INVALID_ACTION               = 3
        NO_CLIENTINDEPENDENT_AUTH    = 4
        NO_DATABASE_FUNCTION         = 5
        NO_EDITOR_FUNCTION           = 6
        NO_SHOW_AUTH                 = 7
        NO_TVDIR_ENTRY               = 8
        NO_UPD_AUTH                  = 9
        ONLY_SHOW_ALLOWED            = 10
        SYSTEM_FAILURE               = 11
        UNKNOWN_FIELD_IN_DBA_SELLIST = 12
        VIEW_NOT_FOUND               = 13
        MAINTENANCE_PROHIBITED       = 14
        OTHERS                       = 15.
    IF SY-SUBRC <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
              WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    Max

  • Filter data from Powerpivot to Excel

    Hi Everyone
    Is there a way to filter data from Powerpivot to Excel so that you are only selecting a subset of records to pivot over in Excel?
    Paul

    Hello Paul,
    Your requirement isn't quite clear to me; can you explain it in more detail, please?
    If you add a PivotTable with PowerPivot as data source, then you can use slicer/filter in the PivotTable to filter a subset of the existing data.
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
