Result size

Hi all,
I want to find the best way (in terms of performance) of fetching a huge number of records (approx. 100,000, each record about 1 KB) from my Oracle 8i database. I am using JDK 1.2.2 at the moment. I also would not want it to be a memory-intensive operation.
One way would be to set a limit size on my SQL query, but that would mean the PreparedStatement has to be executed several times! On the other hand, having no limit on the SQL query would mean a relatively large amount of memory used for the ResultSet.
What would be the best trade-off? Note that I am writing the results to a file.
Any suggested solutions?
Thank you for your time. All replies appreciated.
- SweetLeaf; banned in S'pore :(

> I want to find the best way (in terms of performance) of fetching a huge number of records (approx. 100,000, each record about 1 KB) from my Oracle 8i database. I am using JDK 1.2.2 at the moment. I also would not want it to be a memory-intensive operation.

The best way is: don't do it.

> One way would be to set a limit size on my SQL query, but that would mean the PreparedStatement has to be executed several times! On the other hand, having no limit on the SQL query would mean a relatively large amount of memory used for the ResultSet. What would be the best trade-off? Note that I am writing the results to a file. Any suggested solutions?

Writing to a file - so why don't you just use the tools that come with Oracle?

> Thank you for your time. All replies appreciated.
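For readers weighing the same trade-off inside JDBC itself: a single query with a modest fetch size keeps only one batch of rows in memory at a time while the ResultSet is streamed to the file, so neither repeated statements nor a giant in-memory result is needed. A minimal sketch, assuming JDBC 2.0 (available in JDK 1.2) and an invented connection string and table:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.sql.*;

    public class Export {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@host:1521:SID", "user", "pass");   // hypothetical
            PreparedStatement ps = con.prepareStatement(
                "SELECT id, payload FROM big_table");                 // hypothetical table
            ps.setFetchSize(500);            // rows fetched per round trip; tune to taste
            ResultSet rs = ps.executeQuery();
            PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("dump.txt")));
            while (rs.next()) {              // only ~500 rows are buffered at any moment
                out.println(rs.getString(1) + "," + rs.getString(2));
            }
            out.close();
            rs.close();
            ps.close();
            con.close();
        }
    }

The Oracle thin driver honors setFetchSize, so memory use stays roughly fetchSize x row size regardless of the total result size.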

Similar Messages

  • How to find out the ViewObject result size in a jsff page?

    Hi
    I am having trouble displaying the ViewObject result size on my jsff page. I have dragged a view object onto the JSFF as an updatable ADF table. I would like to print the result size on top of the table. Does anyone have an idea how to achieve this?
    I got this.. Please ignore
    #{bindings.TestVO.estimatedRowCount} - This gives the fetched row count
    Thanks in advance
    Edited by: user616296 on Feb 17, 2010 10:01 AM

    User,
    what is exactly your question?
    #{bindings.TestVO.estimatedRowCount} should give you the total number of rows matching your query, not the fetched size.
    Internally, estimatedRowCount fires a select count(*) from (your query), so you should get the number you are looking for.
    Timo
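    Timo's point can also be checked programmatically. A hedged sketch, assuming an ADF BC ApplicationModule is already at hand and a view object instance named TestVO (the name used in the binding above); as far as I know the EL expression ultimately delegates to the same call:

        import oracle.jbo.ApplicationModule;
        import oracle.jbo.ViewObject;

        public final class RowCountUtil {
            // 'am' is an ApplicationModule obtained from the framework (assumption);
            // "TestVO" matches the view object instance name used above
            public static long countRows(ApplicationModule am) {
                ViewObject vo = am.findViewObject("TestVO");
                return vo.getEstimatedRowCount();   // fires select count(*) from (query)
            }
        }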

  • Query Result Sizes Using LiveOffice

    I'd like to get a community response on this one:
    Can viewers of this post who have LiveOffice product experience answer a question for me?  I am using the Universe Query functionality to place data into Excel.  The result size is roughly 4500 rows by 6 columns of data.  If I remove a condition in the query to expand the results LiveOffice fails with this error:
    Failed to get the document information. (LO 26315)  Maximum character output size limit reached. Contact your BusinessObjects administrator. (Error: ERR_WIS_30272)(6315)
    After checking in Desktop Intelligence...I found the result size of the failed query to be just under 10,000 rows.
    I've been using LiveOffice for some time now, but this is the first instance where I have attempted to run a large amount of data into Excel.  I wouldn't call 10,000 rows a lot, but it is a lot more than I have done in the past.
    Can users of LiveOffice tell me the largest row result they have thrown into Excel using LiveOffice?  It can be with a report part or straight query.  I tried to provide with a webi report part and got the same failure.
    I've logged this with SAP Support and they're baffled so far...

    Hi Gregg,
    what's the data source behind your universe? Could it be that somewhere in your results there is a field that holds more than 32 K? As far as I know, Excel allows 32 K per cell.
    Regards,
    Stratos

  • IronPort Security Management Appliance - Directory Search Results Size

    I'm creating an access policy for a web security appliance that is applied to an authorized group within an identity. My question is about the number of returned results when using the directory search function to find and add the group. Only the first 500 matching entries are shown, and searching for the group fails if it isn't part of that first 500. How do I increase the number of results returned when searching for groups?

    Hello Alex,
    By default, Active Directory does not respond to LDAP based queries which return more than 1000 results. If you have more than 1000 groups configured in Active Directory, it is necessary to increase the maximum page size (MaxPageSize) using the Ntdsutil.exe tool.
    http://support.microsoft.com/default.aspx?scid=kb;en-us;315071&sd=tech
    MaxPageSize - This value controls the maximum number of objects that are returned in a single search result, independent of how large each returned object is. To perform a search where the result might exceed this number of objects, the client must specify the paged search control. This is to group the returned results in groups that are no larger than the MaxPageSize value. To summarize, MaxPageSize controls the number of objects that are returned in a single search result.
    Default value: 1,000
    You can also simply input the group name and then click "Add" to manually add it as a workaround.
    Hope it helps.
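    For reference, the "paged search control" mentioned in the Microsoft article is what the LDAP client sends to page past MaxPageSize. A hedged JNDI sketch (hypothetical domain controller, base DN, and anonymous bind; the javax.naming.ldap API shown is standard):

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.NamingEnumeration;
        import javax.naming.directory.SearchControls;
        import javax.naming.directory.SearchResult;
        import javax.naming.ldap.*;

        public class PagedGroupSearch {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://dc.example.com:389");   // hypothetical DC
                LdapContext ctx = new InitialLdapContext(env, null);
                SearchControls sc = new SearchControls();
                sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
                int pageSize = 500;
                byte[] cookie = null;
                do {
                    // ask the server for the next page of at most pageSize entries
                    ctx.setRequestControls(new Control[] {
                        new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
                    NamingEnumeration<SearchResult> results =
                        ctx.search("DC=example,DC=com", "(objectClass=group)", sc);
                    while (results.hasMore()) {
                        System.out.println(results.next().getNameInNamespace());
                    }
                    // the response control carries the cookie for the next page
                    cookie = null;
                    Control[] controls = ctx.getResponseControls();
                    if (controls != null) {
                        for (Control c : controls) {
                            if (c instanceof PagedResultsResponseControl) {
                                cookie = ((PagedResultsResponseControl) c).getCookie();
                            }
                        }
                    }
                } while (cookie != null && cookie.length > 0);
                ctx.close();
            }
        }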

  • Explain plan changes by result size from contains clause

    I use 10.2.0.3 Std Edition and have a query like this
    select t1.id from table1 t1
    where t1.col99 = 123
    and t1.id in (select ttxt.id from fulltexttable ttxt where contains (ttxt.thetext, 'word1 & word2'));
    (note: for each row in table1 exists at least one corresponding row in fulltexttable)
    Now I came across a surprising change in execution plans depending on the values of word1 and word2:
    - if the number of result rows from the subquery is low compared to all rows the full text index is used (table access by rowid/domain index)
    - if the number of result rows is high explain plan does not indicate any use of the domain index (full text index) but only a full text table scan. And the slow execution proves this plan.
    But: if I create explain plan for the subquery only there is no difference whether the number of result rows is high or low: the full text index is always used.
    Any clue for this change in execution strategy?
    Michael

    hi michael,
    this is expected behaviour. because you have a query incorporating more than just a text index, and furthermore multiple tables, the optimizer may choose different access paths according to the cardinality of your where-clause terms. in fact, anything you see here is vanilla behaviour.
    however, i suppose you have not yet heard about the "post filter" characteristic of a context index; see the tuning chapter of the text dev guide for more info. also note that the optimizer has no other way than accessing the context index directly if you execute the subquery on its own (the "post filter" characteristic is not applicable there, because a post filter always needs some input to be filtered). and finally, be aware that oracle may unnest your subquery by its own decision; that is, do not try to force a direct context index access via a subquery, it will not work (a compiler hint is the only thing that works reliably).
    the only thing i cannot follow is the full table scan in your second example. don't you have join indexes on table1.id and fulltexttable.id, respectively?
    p

  • How can I display more than one record with result set meta data?

    Hi,
    My code:
        ArrayList<String> resultList = new ArrayList<String>();
        try {
            rs = ps.executeQuery();
            ResultSetMetaData rsmd = rs.getMetaData();
            while (rs.next()) {
                for (int k = 1; k <= rsmd.getColumnCount(); k++) {
                    resultList.add(rs.getString(k));
                }
            }
            ps.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return resultList;

        public String test(ArrayList result) throws Exception {
            String data =
                "<tr>" +
                "<td class=normalFont>" + result.get(0) + "</td>" +
                "<td class=normalFont>" + result.get(1) + "</td>" +
                "</tr>";
            return data;
        }
    Everything is working, but the problem is that the ArrayList is displaying just one record, whereas I have more than 20 records to display. I tried a loop like i < result.size(); with result.get(i), but then it throws an exception:
    java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
    I have been stuck here for more than 2 days. Please help me.
    Best regards

    Raakh wrote:
    Still waiting .....
    I would have answered much earlier, but when I saw this little bit of impatience, I decided to delay answering for a while.
    ArrayList<String> list = new ArrayList<String>();
    list.add("abc");
    list.add("def");
    list.add("ghi");
    System.out.println(list.get(0));
    abc
    System.out.println(list.get(1));
    def
    System.out.println(list);
    [abc, def, ghi]
    That list has 3 items. Each one is a String.
    But here is what you appear to be doing:
    select * from person
    +-----+-------------+-------------+--------+
    | id  |  first name |  last name  | height |
    +-----+-------------+-------------+--------+
    |   1 | Joe         | Smith       | 180    |
    +-----+-------------+-------------+--------+
    |   2 | Mary        | Jones       | 144    |
    +-----+-------------+-------------+--------+
    for each row in ResultSet {
      for each column in ResultSet {
        list.add(that element);
      }
    }
    // which becomes
    list.add(1);
    list.add("Joe");
    list.add("Smith");
    list.add(180);
    list.add(2);
    list.add("Mary");
    list.add("Jones");
    list.add(144);
    System.out.println(list.get(0));
    1
    System.out.println(list.get(1));
    Joe
    System.out.println(list);
    [1, Joe, Smith, 180, 2, Mary, Jones, 144]
    That list has 8 items. Some of them are Strings and some of them are Integers. I would assume that, for this sample case, you would want a list with 2 items, both of which are Person objects. However, it really isn't clear from your posts what you are trying to do or what difficulty you're having, so I'm just guessing.
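    To make the row structure concrete: one common fix is a list of rows, where each row is itself a list of column values. A minimal sketch (the class and helper names are invented, not from the original post):

        import java.sql.*;
        import java.util.*;

        public class RowFetcher {
            // one inner list per row, so rows.get(i) holds the columns of row i
            public static List<List<String>> fetchRows(PreparedStatement ps) throws SQLException {
                List<List<String>> rows = new ArrayList<List<String>>();
                ResultSet rs = ps.executeQuery();
                ResultSetMetaData rsmd = rs.getMetaData();
                while (rs.next()) {
                    List<String> row = new ArrayList<String>();
                    for (int k = 1; k <= rsmd.getColumnCount(); k++) {
                        row.add(rs.getString(k));
                    }
                    rows.add(row);
                }
                rs.close();
                return rows;   // rows.size() is the record count
            }
        }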

  • How to get total number of result count for particular key on cluster

    Hi-
    My application requires the client side to receive only a limited number of records for a 'Search Key' out of the total records found in the cluster. I also need the total result count for that key present on the cluster.
    To get the subset of records I'm using an IndexAwareFilter and returning only a limited set from each individual node. Though I get the total number of records present on each individual node, it is not possible to return this count to the client from the IndexAwareFilter (the filter returns only a Binary set).
    Is there any way I can get this number (the total result size) on the client side without returning the whole chunk of data?
    Thanks in advance.
    Prashant

    user11100190 wrote:
    Hi,
    Thanks for suggesting a soultion, it works well.
    But apart from the count (cardinality), the client also expects the actual results. In this case, it seems the filter will be executed twice (once for counting, then once again for generating the actual result set).
    Actually, we need to perform the paging. In order to achieve paging in efficient manner we need that filter returns only the PAGESIZE records and it also returns the total 'count' that meets the criteria.
    If you want to do paging, you can use the LimitFilter class.
    If you want to have paging AND the total number of results, then at the moment you have to use two passes if you want to use out-of-the-box features, because LimitFilter does not return the total number of results (which, by the way, may change between two page retrievals).
    What we currently do is: the filter puts the total count in a static variable but returns only the first N records. The aggregator then clubs this info into a single list and returns it to the client. (The list returned by the aggregator contains a special entry representing the count.)
    This is not really a good idea, because if you have more than one user doing this operation then you will have problems storing more than one value in a single static variable, if you used a cache service with a thread-pool (thread-count set to larger than one).
    We assume that the aggregator will execute immediately after the filter on the same node, this way aggregator will always read the count set by the filter.
    You can't assume this if you have multiple client threads doing the same kind of filtering operation and you have a thread-pool configured for the cache service.
    Please tell us if our approach will always work, and whether it will be efficient compared to using the Count class, which requires executing the filter twice.
    No it won't if you used a thread-pool. Also, it might happen that Coherence will execute the filtering and the aggregation from the same client thread multiple times on the same node if some partitions were newly moved to the node which already executed the filtering+aggregation once. I don't know anything which would even prevent this being executed on a separate thread concurrently.
    The following solution may be working, but I can't fully recommend it as it may leak memory depending on how exactly the filtering and aggregation is implemented (if it is possible that a filtering pass is done but the corresponding aggregation is not executed on the node because of some partitions moved away).
    At sending the cache.aggregate(Filter, EntryAggregator) call you should specify a unique key for each such filtering operation to both the filter and the aggregator.
    On the storage node you should have a static HashMap.
    The filter should do the following two steps while being synchronized on the HashMap.
    1. Ensure that a ConcurrentLinkedQueue object exists in a HashMap keyed by that unique key, and
    2. Enqueue the total number count you want to pass to the aggregator into that queue.
    The parallel aggregator should do the following two steps while being synchronized on the HashMap.
    1. Dequeue a single element from the queue, and return it as a partial total count.
    2. If the queue is now empty, then remove it from the HashMap.
    The parallel aggregator should return the popped number as a partial total count as part of the partial result.
    The client side of the parallel aware aggregator should sum the total counts in the partial result.
    Since the enqueueing and dequeueing may be interleaved from multiple threads, it may be possible that the partial total count returned in a result does not correspond to the data in the partial result, so you should not base anything on that assumption.
    Once again, that approach may leak memory based on how Coherence is internally implemented, so I can't recommend this approach but it may work.
    Another thought is that since returning entire cached values from an aggregation is more expensive than filtering (you have to deserialize and reserialize objects), you may still be better off by running a separate count and filter pass from the client, since for that you may not need to deserialize entries at all, so the cost on the server may be lower.
    Best regards,
    Robert
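    As a concrete picture of the straightforward two-pass variant discussed above, a hedged sketch (the cache name and criteria are invented for the example; as Robert notes, the count can change between the two passes):

        import java.util.Set;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.Filter;
        import com.tangosol.util.aggregator.Count;
        import com.tangosol.util.filter.EqualsFilter;
        import com.tangosol.util.filter.LimitFilter;

        public class PagedQuery {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("example");       // hypothetical cache
                Filter criteria = new EqualsFilter("getStatus", "OPEN");   // hypothetical criteria
                // pass 1: total number of matching entries
                int total = ((Integer) cache.aggregate(criteria, new Count())).intValue();
                // pass 2: page through the matching entries, pageSize at a time
                int pageSize = 20;
                LimitFilter page = new LimitFilter(criteria, pageSize);
                for (int i = 0; i * pageSize < total; i++) {
                    page.setPage(i);
                    Set entries = cache.entrySet(page);
                    System.out.println("page " + i + ": " + entries.size() + " of " + total);
                }
            }
        }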

  • How to know exact size of table with blob column

    I have a table with one BLOB column. I ran this query.
    select bytes/1024/1024 from user_segments where segment_name='GMSSP_REQUEST_TEMP_FILES'
    (user_segments is a view)
    it gave me 0.125
    That means the size of the table is 0.125 MB. I have uploaded 3 files to this table, each of them 5 MB. After that I checked the size of the table, but the result was the same, i.e. 0.125 MB.
    Can anybody tell me how to know the exact amount of space consumed by the files? I am expecting the following result:
    size should be (5+5+5+0.125) MB
    Any help is appreciated.
    Thanks.
    Akie

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_1092.htm#i1581211
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:266215435203#993030200346552097
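    The short version of those links: BLOB data beyond a few kilobytes is stored in its own LOB segment, so USER_SEGMENTS must be queried for the LOB segment as well as the table segment. A hedged sketch of that query via JDBC (the table name is taken from the post; 'con' must be an open connection):

        import java.sql.*;

        public class TableSize {
            // prints the combined size (MB) of the table segment and its LOB segments
            public static void printSize(Connection con) throws SQLException {
                String sql =
                    "SELECT SUM(bytes)/1024/1024 FROM user_segments " +
                    "WHERE segment_name = 'GMSSP_REQUEST_TEMP_FILES' " +
                    "   OR segment_name IN (SELECT segment_name FROM user_lobs " +
                    "                       WHERE table_name = 'GMSSP_REQUEST_TEMP_FILES')";
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery(sql);
                if (rs.next()) {
                    System.out.println("total size: " + rs.getDouble(1) + " MB");
                }
                rs.close();
                st.close();
            }
        }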

  • Mac OS creates huge PDF file - PDF file size too big

    Hi,
    I have an 80 KB Word file. The file is one page: some text and a 12 KB .jpg file imported in. I then duplicated the jpg graphic 7 times (8 total images).
    Here are the resulting sizes when creating PDF's using different methods within Mac OS X print menu:
    Save as PDF 550 KB
    Compress PDF 5.2 MB !!!!
    Save as PDF w/ Quartz reduce file size 2.5 MB !!!!
    Compress PDF w/ Quartz reduce file size 2.1 MB !!!!
    I recreated (from scratch) the same content using Canvas. I got the following:
    Canvas X "Save as:PDF" 56 KB
    Mac OS print pdf 550 KB
    Mac OS print compress PDF 2.5 MB !!!
    Can anyone explain what is going on??? I can make the file available if you wish to play with it.
    Version Details:
    Word 2004 (11.2)
    Mac OS X 10.4.7
    Thanks much,
    Mace
    Intel iMac 20", PB15 Alu, iMacG4 17, Sawtooth   Mac OS X (10.4.7)   Many other Macs from 128k on.

    I don't have Distiller on my Mac... and the point (which I haven't gotten into) is that this is for a friend who just got a Mac. So, here I was touting the ease of use, and she goes and finds a way to turn a 60 kB Word file into this monstrosity. So, the point is to avoid "extra steps" and additional software (which I eventually had to do anyway with Canvas X).
    I just don't understand how the Mac OS X PDF maker can take a 60 kB Word document and turn it into a 5 MB behemoth... and this is when set to compress the PDF!
    I mean, a 5 MB JPEG is about 40-50 MB uncompressed, which for an 8.5x11" page is 300 dpi at 16-bit RGB color, or 400 dpi at 8-bit color.
    I've just never come across anything like this and am totally clueless about what is going on.
    Thanks for any insights.
    Mace

  • Different Compressor settings for export in FCPX = same result

    HI
    For test purposes, for a movie screener to upload, I've been changing the settings of a custom Compressor setting I imported into FCPX, but I seem to get the same result size-wise whether it's 1-pass or 2-pass.
    Do I need to delete the icon in FCPX and then re-choose it?
    best
    elmer

    The Compressor quality slider (Least>>>>Best) in the dialog is activated when the Bit Rate is set to Automatic. If a bit rate is entered in the Restrict to field, the slider is inactive. Also, the slider is not available at all for some codecs.
    One way or the other, one chooses a bit rate, and that times the duration is the total bit budget that Compressor figures out how to allocate on the first pass. Complex scenes get higher rates and less complex ones get lower rates.
    Russ

  • Result Cache Oracle 11gR2

    Hi all,
    Currently I have some problems with result cache, or maybe I don't understand this feature properly.
    I'm trying to switch off the bypass mode, and I'm not able to do this:
    SQL> select dbms_result_cache.status from dual;
    STATUS
    BYPASS
    SQL> exec dbms_result_cache.bypass(FALSE);
    PL/SQL procedure successfully completed.
    SQL> select dbms_result_cache.status from dual;
    STATUS
    BYPASS
    SQL> show parameter result
    NAME TYPE VALUE
    client_result_cache_lag big integer 3000
    client_result_cache_size big integer 0
    result_cache_max_result integer 5
    result_cache_max_size big integer 0
    result_cache_mode string MANUAL
    result_cache_remote_expiration integer 0
    SQL> alter system set result_cache_max_size=2M scope=both;
    System altered.
    SQL> show parameter result
    NAME TYPE VALUE
    client_result_cache_lag big integer 3000
    client_result_cache_size big integer 0
    result_cache_max_result integer 5
    result_cache_max_size big integer 0
    result_cache_mode string MANUAL
    result_cache_remote_expiration integer 0
    SQL> alter system set result_cache_max_size=2M scope=spfile;
    System altered.
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 523108352 bytes
    Fixed Size 1337632 bytes
    Variable Size 465569504 bytes
    Database Buffers 50331648 bytes
    Redo Buffers 5869568 bytes
    Database mounted.
    Database opened.
    SQL> show parameter result
    NAME TYPE VALUE
    client_result_cache_lag big integer 3000
    client_result_cache_size big integer 1G
    result_cache_max_result integer 5
    result_cache_max_size big integer 0
    result_cache_mode string MANUAL
    result_cache_remote_expiration integer 0
    SQL> select dbms_result_cache.status from dual;
    STATUS
    BYPASS
    SQL> set serveroutput on;
    SQL> exec dbms_result_cache.memory_report
    R e s u l t C a c h e M e m o r y R e p o r t
    [Parameters]
    Block Size = 1K bytes
    Maximum Cache Size = 0 bytes (0 blocks)
    Maximum Result Size = 0 bytes (0 blocks)
    [Memory]
    Total Memory = 9440 bytes [0.004% of the Shared Pool]
    ... Fixed Memory = 9440 bytes [0.004% of the Shared Pool]
    ... Dynamic Memory = 0 bytes [0.000% of the Shared Pool]
    PL/SQL procedure successfully completed.
    SQL>
    Is there something I missed?
    Thanks for any advice.
    Regards,
    Piotr

    A little bit more theory:
    The result cache resides in the shared pool. Because the database was managing its memory automatically (the memory_target parameter was set) and no one was working in this environment, the database had not yet allocated enough space to the shared pool to give any to the result cache. I had to set a minimum value for the shared pool, to force the database to allocate at least that much to it during start-up and never shrink the shared pool below the specified value.
    Hope this explanation helps other people :)
    Regards,
    Petrus

  • Difference in partition size on different servers

    Hi!
    I copied my cube from one server to the other with import DB. When I processed my cube on the other server with new data, the size of one partition changed from 1.8G to 12G. When I check the original tables, their sizes are almost the same, and number of rows
    too. Then I copied Db and the cube from the first server to the second one - same result - size of partion grew up abnormally.
    What can be the problem and how can I minimize the size of my partition?

    Hi Tatyanaa,
    Microsoft SQL Server Analysis Services provides several standard storage configurations for storage modes and caching options. Which type of storage are you used in the second server?
    MOLAP - The MOLAP storage mode causes the aggregations of the partition and a copy of its source data to be stored in
    a multidimensional structure in Analysis Services when the partition is processed.
    ROLAP - The ROLAP storage mode causes the aggregations of the partition to be stored in indexed views in the relational
    database that was specified in the partition's data source.
    So if the partition storage are set to ROLAP in first server and MOLAP in the second server, then you may encounter this issue since the source data is stored in the Analysis Services server. For the detail information about it, please refer to the link
    below.
    http://www.sql-server-performance.com/2013/ssas-storage-modes/
    http://sqlblog.com/blogs/jorg_klein/archive/2008/03/27/ssas-molap-rolap-and-holap-storage-types.aspx
    http://msdn.microsoft.com/en-IN/library/ms175646.aspx
    Regards,
    Charlie Liao
    TechNet Community Support

  • LessFilter and ReflectionExtractor API giving incorrect results

    I am using Oracle Coherence version 3.7. We are storing DTO objects in the cache with a "modificationTime" property/instance variable of type java.util.Date. To fetch data from the cache, passing a java.util.Date variable as the input for comparison, the LessFilter and ReflectionExtractor APIs are used. Cache.entrySet(filter) returns incorrect results.
    Note: we are using com.tangosol.io.pof.PofWriter.writeDateTime(int arg0, Date arg1) to store data in the cache and com.tangosol.io.pof.PofReader.readDate(int arg0) to read data from the cache. There is no readDateTime API available?
    We tested the same scenario after updating the DTO class: it now has another property of type long (to store the milliseconds). Now the long is passed as the input for comparison to the LessFilter and ReflectionExtractor APIs, and correct results are retrieved.
    Ideally, a java.util.Date or the corresponding milliseconds passed as input should filter and return the same, and logically correct, results.
    Code:
    1) Test by date: returns incorrect results
    public void testbyDate(final Date startDate) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
        LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
        final Filter lessFilter = new LessFilter(extractor, startDate);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }
    2) Test by milliseconds: returns correct results
    public void testbyTime(final Long time) throws IOException {
        final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
        LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
        final Filter lessFilter = new LessFilter(extractor, time);
        final Set results = CACHE.entrySet(lessFilter);
        LOGGER.debug("Fetched Records:" + results.size());
        assert results.isEmpty();
    }

    Hi Harvy,
    Thanks for your reply. You validated it against a single object in the cache using ExternalizableHelper.toBinary/ExternalizableHelper.fromBinary, but we are querying against a collection of objects in the cache.
    Please have a look at the code below.
    1) We are using TestDTO.java, extending AbstractCacheDTO.java, as the value object for our cache.
    import java.io.IOException;
    import java.util.Date;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;

    /**
     * The Class AbstractCacheDTO.
     * @param <E> the element type
     * @author apanwa
     */
    public abstract class AbstractCacheDTO<E> extends AbstractEvolvable implements EvolvablePortableObject {

        /** The Constant IDENTIFIER. */
        private static final int IDENTIFIER = 0;

        /** The Constant CREATION_TIME. */
        private static final int CREATION_TIME = 1;

        /** The Constant MODIFICATION_TIME. */
        private static final int MODIFICATION_TIME = 2;

        /** The version number of the cache DTO implementation. */
        private static final int VERSION = 11662;

        /** The id. */
        private E id;

        /** The creation time. */
        private Date creationTime = new Date();

        /** The modification time. */
        private Date modificationTime;

        /**
         * Gets the id.
         * @return the id
         */
        public E getId() {
            return id;
        }

        /**
         * Sets the id.
         * @param id the new id
         */
        public void setId(final E id) {
            this.id = id;
        }

        /**
         * Gets the creation time.
         * @return the creation time
         */
        public Date getCreationTime() {
            return creationTime;
        }

        /**
         * Gets the modification time.
         * @return the modification time
         */
        public Date getModificationTime() {
            return modificationTime;
        }

        /**
         * Sets the modification time.
         * @param modificationTime the new modification time
         */
        public void setModificationTime(final Date modificationTime) {
            this.modificationTime = modificationTime;
        }

        /**
         * Read external.
         * @param reader the reader
         * @throws IOException Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#readExternal(com.tangosol.io.pof.PofReader)
         */
        @Override
        public void readExternal(final PofReader reader) throws IOException {
            id = (E) reader.readObject(IDENTIFIER);
            creationTime = reader.readDate(CREATION_TIME);
            modificationTime = reader.readDate(MODIFICATION_TIME);
        }

        /**
         * Write external.
         * @param writer the writer
         * @throws IOException Signals that an I/O exception has occurred.
         * @see com.tangosol.io.pof.PortableObject#writeExternal(com.tangosol.io.pof.PofWriter)
         */
        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            writer.writeObject(IDENTIFIER, id);
            writer.writeDateTime(CREATION_TIME, creationTime);
            writer.writeDateTime(MODIFICATION_TIME, modificationTime);
        }

        @Override
        public int getImplVersion() {
            return VERSION;
        }
    }

    import java.io.IOException;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;

    /**
     * @author nkhatw
     */
    public class TestDTO extends AbstractCacheDTO<TestIdentifier> {

        private Long timeinMillis;

        private static final int TIME_MILLIS_ID = 3;

        @Override
        public void readExternal(final PofReader reader) throws IOException {
            super.readExternal(reader);
            timeinMillis = Long.valueOf(reader.readLong(TIME_MILLIS_ID));
        }

        @Override
        public void writeExternal(final PofWriter writer) throws IOException {
            super.writeExternal(writer);
            writer.writeLong(TIME_MILLIS_ID, timeinMillis.longValue());
        }

        /**
         * @return the timeinMillis
         */
        public Long getTimeinMillis() {
            return timeinMillis;
        }

        /**
         * @param timeinMillis the timeinMillis to set
         */
        public void setTimeinMillis(final Long timeinMillis) {
            this.timeinMillis = timeinMillis;
        }
    }

    2) TestIdentifier.java as the key in the cache for storing TestDTO objects.
    import java.io.IOException;
    import org.apache.commons.lang.StringUtils;
    import com.tangosol.io.AbstractEvolvable;
    import com.tangosol.io.pof.EvolvablePortableObject;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;

    /**
     * @author nkhatw
     */
    public class TestIdentifier extends AbstractEvolvable implements EvolvablePortableObject {

        private String recordId;

        /** The Constant RECORD_ID. */
        private static final int RECORD_ID = 0;

        /** The version number of the cache DTO implementation. */
        private static final int VERSION = 11660;

        @Override
        public void readExternal(final PofReader pofreader) throws IOException {
            recordId = pofreader.readString(RECORD_ID);
        }

        @Override
        public void writeExternal(final PofWriter pofwriter) throws IOException {
            pofwriter.writeString(RECORD_ID, recordId);
        }

        @Override
        public int getImplVersion() {
            return VERSION;
        }

        @Override
        public boolean equals(final Object object) {
            if (object instanceof TestIdentifier) {
                final TestIdentifier id = (TestIdentifier) object;
                return StringUtils.equals(recordId, id.getRecordId());
            } else {
                return false;
            }
        }

        /**
         * @see java.lang.Object#hashCode()
         */
        @Override
        public int hashCode() {
            return recordId.hashCode();
        }

        /**
         * @return the recordId
         */
        public String getRecordId() {
            return recordId;
        }

        /**
         * @param recordId the recordId to set
         */
        public void setRecordId(final String recordId) {
            this.recordId = recordId;
        }
    }

    3) Use case
    We are fetching TestDTO records from the cache based on a LessFilter. However, the results returned from the cache differ depending on whether the query is made over the property "getModificationTime" of type java.util.Date or over the property "getTimeinMillis" of type Long (the milliseconds corresponding to the date). TestService.java is used for this.
    import java.io.IOException;
    import java.util.Collection;
    import java.util.Date;
    import java.util.Map;
    import java.util.Set;
    import org.apache.log4j.Logger;
    import com.ladbrokes.dtos.cache.TestDTO;
    import com.ladbrokes.dtos.cache.TestIdentifier;
    import com.cache.services.CacheService;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.ValueExtractor;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.LessFilter;

    /**
     * @author nkhatw
     */
    public class TestService implements CacheService<TestIdentifier, TestDTO, Object> {

        private static final String TEST_CACHE = "testcache";
        private static final NamedCache CACHE = CacheFactory.getCache(TEST_CACHE);
        private static final Logger LOGGER = Logger.getLogger(TestService.class);

        /**
         * Push DTO objects with a) modTime of java.util.Date type, b) timeInMillis of Long type.
         * @throws IOException
         */
        public void init() throws IOException {
            for (int i = 0; i < 30; i++) {
                final TestDTO dto = new TestDTO();
                final Date modTime = new Date();
                dto.setModificationTime(modTime);
                final Long timeInMillis = Long.valueOf(System.currentTimeMillis());
                dto.setTimeinMillis(timeInMillis);
                final TestIdentifier testId = new TestIdentifier();
                testId.setRecordId(String.valueOf(i));
                dto.setId(testId);
                final CacheService testService = new TestService();
                testService.createOrUpdate(dto, null);
                LOGGER.debug("Pushed record in cache with key: " + i + " modTime: " + modTime
                    + " Time in millis: " + timeInMillis);
            }
        }

        /**
         * 1) Fetch data from the cache based on a LessFilter with args:
         * a) ValueExtractor: extracting the time property
         * b) java.util.Date value to be compared with
         * 2) Verify the extracted entry set.
         * @throws IOException
         */
        public void testbyDate(final Date startDate) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
            LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
            final Filter lessFilter = new LessFilter(extractor, startDate);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
        }

        /**
         * 1) Fetch data from the cache based on a LessFilter with args:
         * a) ValueExtractor: extracting the "time in millis" property
         * b) java.lang.Long value to be compared with
         * 2) Verify the extracted entry set.
         */
        public void testbyTime(final Long time) throws IOException {
            final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
            LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
            final Filter lessFilter = new LessFilter(extractor, time);
            final Set results = CACHE.entrySet(lessFilter);
            LOGGER.debug("Fetched Records:" + results.size());
            assert results.isEmpty();
        }

        @Override
        public void createOrUpdate(final TestDTO testDTO, final Object arg1) throws IOException {
            CACHE.put(testDTO.getId(), testDTO);
        }

        @Override
        public void createOrUpdate(final Collection<TestDTO> arg0, final Object arg1) throws IOException {
            // TODO Auto-generated method stub
        }

        @Override
        public <G> G read(final TestIdentifier arg0) throws IOException {
            // TODO Auto-generated method stub
            return null;
        }

        @Override
        public Collection<?> read(final Map<TestIdentifier, Object> arg0) throws IOException {
            // TODO Auto-generated method stub
            return null;
        }

        @Override
        public void remove(final TestDTO arg0) throws IOException {
            // TODO Auto-generated method stub
        }
    }

    Use case execution results:
    The "testbyTime" method returns correct results.
    However, the "testbyDate" method gives random and incorrect results.

  • Problem in user-specific search results

    Hi All,
    I have created an index which is indexing the docs properly. Now, I have written code to show user-specific docs using the index management APIs.
    I have used an EP5 current user.
    ISearchResultList results = session.getSearchResults(1, session.getNumberResultKeys());
    In the statement above, I am getting the proper value for session.getNumberResultKeys(), but the ISearchResultList size is still always zero.
    Is there anything I am missing in the code, or is it a configuration problem?
    I have full control permissions on the index as well as on the repository folder. Can someone please get me out of this problem? It's urgent.
    Thanks & Regards,
    Udit

    Hi Udit,
    I got the solution for the problem. Here is the code I am using, and it's working great. Just give it a try.
    com.sapportals.portal.security.usermanagement.IUser user = null;
    user = (com.sapportals.portal.security.usermanagement.IUser) request.getUser().getUser();
    response.write("<html><head><title>Search</title></head><body>");
    ResourceContext c = new ResourceContext(user);
    try {
        IIndexService indexService =
            (IIndexService) ResourceFactory
                .getInstance()
                .getServiceFactory()
                .getService(IServiceTypesConst.INDEX_SERVICE);
        SearchQueryListBuilder sqb = new SearchQueryListBuilder();
        sqb.setSearchTerm("valero");
        IQueryEntryList qel = sqb.buildSearchQueryList();
        // get an instance of federated search
        IFederatedSearch federatedSearch =
            (IFederatedSearch) indexService.getObjectInstance(
                IWcmIndexConst.FEDERATED_SEARCH_INSTANCE);
        List indexList = new ArrayList();
        String index = null;
        if (index != null && index.length() > 0) {
            // take a specified index from the index= parameter
            indexList.add(indexService.getIndex(index));
        } else {
            // take all available indexes
            indexList = indexService.getActiveIndexes();
        }
        // it is recommended to use a search session object for search execution
        ISearchSession session = null;
        if (session == null) {
            session = federatedSearch.searchWithSession(qel, indexList, c);
        }
        this.renderResultHeader(response, session, indexList);
        // get all results from the search session
        ISearchResultList results =
            session.getSearchResults(1, session.getTotalNumberResultKeys());
        response.write(" from session => " + results.size() + " and size is : " + results.toString());
    } catch (Exception e) {
        // the original post was cut off before the catch block
        e.printStackTrace();
    }
    This prints the size of the ISearchResultList. The problem was that the user we were passing to the ResourceContext was not the proper one.

  • Adjusting size of photo prior to emailing it

    Whenever I send an email with a photo attached (from iPhoto) it is huge. I've checked Help and can't find anything referring to how to resize a photo prior to attaching it to an email. CH

    To send pic via Comcast email:
    Per RatVega: use Preview to change size of photo   
    Preview > Tools > Adjust size
    Change size by changing height & width > check Resulting Size box
    at bottom of box. Adjust numbers to make file the size you want.
    Save As...Name & put file where it's easy to find, such as on Desktop
    Go to Comcast mail
    Click New > Set up new email.
    Click Attach at top of email window > Attach File window opens
    Click on Choose File > Finder window opens
    Click on the pic file you set up in Preview > Click Choose button
    This takes you back to the Comcast Attach File window.
    Check that your file is there, to the right of the Choose File button
    Click Attach at bottom of window
    Wait a couple seconds > email will open with file attached.
    This isn't quite as tedious as it looks. It does get the pic sized & in Comcast.
    Good luck.
