Is this the best approach to reduce memory usage?

Hi -
I have been given a task to reduce heap memory usage so that the system can support a greater number of users. I have used various suggestions given in this forum to find out the size of the object in memory, and I have reached a point where I think I have an approximate (not 100% exact) size for the object.
The main object holds some objects of other classes which are created when it is created. The intent was to initialize these nested objects once and use them throughout the main object. I saw a significant reduction in the size of the object when I created these objects locally in the methods that use them.
Before moving the objects to method level
class A {
    // Object/someMethod() are placeholders for the real nested classes
    Object b = new Object();
    Object c = new Object();
    Object d = new Object();

    public void method1() {
        b.someMethod();
    }
    public void method2() {
        b.someMethod();
    }
    public void method3() {
        c.someMethod();
    }
    public void method4() {
        c.someMethod();
    }
    public void method5() {
        d.someMethod();
    }
    public void method6() {
        d.someMethod();
    }
}
After moving the objects to method level
class A {
    public void method1() {
        Object b = new Object();
        b.someMethod();
    }
    public void method2() {
        Object b = new Object();
        b.someMethod();
    }
    public void method3() {
        Object c = new Object();
        c.someMethod();
    }
    public void method4() {
        Object c = new Object();
        c.someMethod();
    }
    public void method5() {
        Object d = new Object();
        d.someMethod();
    }
    public void method6() {
        Object d = new Object();
        d.someMethod();
    }
}
Note: This object remains in the HTTP session for at least 2 hours. I cannot change the session timeout.
Is this a better approach to reduce heap usage? What are the side effects of creating all the objects locally in the methods, which (I assume) will put them on the stack?
Thanks in advance

The point is not that the objects are on the stack - they aren't; all objects live on the heap. The point is that they have a much shorter life: they become unreachable as soon as the method exits, rather than surviving until the session times out. And the garbage collector will probably recycle them pretty promptly, because they die young while still in Eden space.
(In future versions of the JVM Sun is hoping to use "escape analysis" to reclaim such objects even faster).
Of course some objects might have a significant creation overhead, in which case you might want to consider creating some kind of pool from which one can be borrowed for the duration of the call (a rough sketch of the idea follows below). With simple objects, though, the overhead of pooling is likely to be higher than the cost of simply creating them.
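As a rough, hedged sketch of that pooling idea (the class name ExpensiveHelper and its reset() method are placeholders I have made up, not anything from your code), a minimal pool in Java could look like this:

    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical stand-in for one of the expensive-to-create nested objects (b, c or d).
    class ExpensiveHelper {
        void someMethod() { /* ... */ }
        void reset() { /* clear any per-call state before the instance is reused */ }
    }

    // Minimal pool: borrow an instance for the duration of a call, then give it back,
    // so costly objects are reused instead of being created per call or held per session.
    class HelperPool {
        private final ConcurrentLinkedQueue<ExpensiveHelper> pool =
                new ConcurrentLinkedQueue<ExpensiveHelper>();

        ExpensiveHelper borrow() {
            ExpensiveHelper h = pool.poll();          // null if the pool is empty
            return (h != null) ? h : new ExpensiveHelper();
        }

        void giveBack(ExpensiveHelper h) {
            h.reset();
            pool.offer(h);
        }
    }

A method would borrow an instance, call someMethod() inside a try block and give the instance back in a finally block. Note that an unbounded queue like this never shrinks, so a real pool would need a size limit; as said above, for cheap objects the bookkeeping usually costs more than it saves.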
Are these objects modified during use? If not, then you might simply be able to create one instance of each for the whole application, by changing the fields in the original class to static. Whether that is safe depends on thread safety; see the sketch below.
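As a minimal sketch of that static-field idea, assuming the nested objects are stateless and thread-safe (Helper is a made-up name standing in for your real classes, and only a couple of the methods are shown):

    class A {
        // One shared instance per class for the whole application,
        // instead of one copy per session-scoped object.
        // Only safe if Helper holds no mutable per-call state (verify thread safety first).
        private static final Helper B = new Helper();
        private static final Helper C = new Helper();
        private static final Helper D = new Helper();

        public void method1() {
            B.someMethod();
        }

        public void method3() {
            C.someMethod();
        }
    }

    // Hypothetical stand-in for the original nested objects.
    class Helper {
        void someMethod() { /* must not modify shared state without synchronization */ }
    }

With this arrangement the session object holds no references to the helpers at all, so nothing extra survives for the two-hour session lifetime.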

Similar Messages

  • I opened up a Kingston memory stick last week on my computer and it worked fine. Now the memory stick will not open on my computer. How do I find the memory stick if the icon does not come up on my desktop screen?

    I cannot seem to open my memory stick any more. I dragged the icon to the trash before I pulled the stick out, and the memory stick icon no longer shows up on my computer. It also will not open on any other computer. Is there any way to find the memory stick when it's plugged in without having the icon show up?

    You might try the obvious, although I have seen memory sticks fail. Good luck.
    1) Remove the memory stick
    2) Shut the computer down
    3) Restart the computer
    4) Login
    5) Re-insert the memory stick

  • Best approach to reduce size of 400GB PO (yikes!)

    Hi fellow Groupwise gurus,
    Am taking a position with a new company that has just informed me they have a 400GB PO message store. I informed them that, uh yea, this is a bit of a problem. So, I am starting to consider best way(s) to deal with this. They are on NetWare 6.5 with Groupwise 7.03. My inclination is to take them to GW 8 because we can do an in place upgrade leaving GW on NetWare for the time being (they can't do the 2012 upgrade and Linux migration, they say, until next year). I would rather have them on GW8 for stability reasons and better support, 7.03 being so old now. I know we could (and perhaps should) create another PO and move users to reduce the size of the main PO. Not sure yet, however, if we have hardware resources to setup another server (although I guess we could just load a 2nd instance of the POA at, say, Port 1678). Retention is obviously a problem here as it is likely that nobody has deleted or purged anything for many, many years (and there are quite possibly less than 50 users, I believe, on this PO!). I would love to get them on Retain, but again that is not an option until next year. We could have them start archiving using the Groupwise client archiving feature, but that does create another set of files outside of the PO that need to be backed up reliably. And finally, we could have people delete but I am not sure that is a viable option. Any suggestions? FYI - they are having to restart the POA on a regular basis due to POA instability and unresponsiveness. Plus, backups take forever, of course. Thanks.
    Don

    On 04.05.2012 02:26, djhess wrote:
    >
    > Hi fellow Groupwise gurus,
    > Am taking a position with a new company that has just informed me they
    > have a 400GB PO message store. I informed them that, uh yea, this is a
    > bit of a problem.
    And that is a problem why?
    The biggest PO I support currently is 2,5TB. Runs without a hitch.
    > So, I am starting to consider best way(s) to deal
    > with this.
    Leave it alone, and reconsider your stance on what is a "problem" for
    GroupWise? ;)
    > Not sure yet, however, if we have hardware
    > resources to setup another server (although I guess we could just load a
    > 2nd instance of the POA at, say, Port 1678).
    And the latter, running on the same server, would only further increase
    the total size of the data, but otherwise not have any positive effect.
    > Any suggestions? FYI - they are having to
    > restart the POA on a regular basis due to POA instability and
    > unresponsiveness.
    Which is almost certainly unrelated to its size. Before I'd make any
    changes, I would want to *know* the cause of those problems. Post some
    details of what exactly happens, and we'll be able to help with that.
    CU,
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
    http://www.cfc-it.de

  • What is the best approach to take a daily backup of an application from the CQ5 server?

    Hello,
    How do we maintain a daily backup of the data from the CQ5 server?
    What is the best approach?
    Regards,
    Satish Sapate.

    The link shared by Ryan should give enough information.
    In case you are backing up a large repository, you may know that the Data Store holds large binaries and each binary is only stored once. To reduce the backup time, remove the datastore from the backup by following [1] (CQ 5.3 example).
    [1] In order to remove the datastore from the backup you will need to do the following:
    Assuming your repository is under /website/crx/repository and you want to move your datastore to /website/crx/datastore
        stop the crx instance
        mv /website/crx/repository/shared/repository/datastore /website/crx/
        Then modify repository.xml by adding the new path configuration to the DataStore element.
    Before:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="path" value="/website/crx/datastore"/>
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After doing this then you can safely run separate backups on the datastore while the system is running without affecting performance very much.
    Following our example, you could use rsync to back up the datastore:
    rsync -av --ignore-existing /website/crx/datastore /website/backup/datastore

  • What is the best approach to handle multiple FKs to a single table?

    If two tables are joined with each other with more than one ways, for example
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If above two objects are imported with FKs in a EUL and discoverer plus is used to create above report. On first inclusion of person name discoverer plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick 'person creator' join it will never allow you to pick person modifier name.
    One solution is to create a custom folder with a query like
    select col1, col2,...coln,
    pc.name, pc.address,.... pc.phone,
    pm.name, pm.address,.... pm.phone
    from main m,
    person pc,
    person pm
    where m.person_id_creator = pc.person_id
    and m.person_id_modifier = pm.person_id
    The second solution is to import the PERSON folder twice in the EUL (optionally naming one PERSON_CREATOR and the other PERSON_MODIFIER) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    Question is, what approach is better OR is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all the joins (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address - now you have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed in. This will work perfectly for the above requirement, but a downside is that the report will run slower if filters are needed on person names (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes - then it's not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion the best approach (although by all means not the only approach - see below) would be to load the object as two folders, with the first join going to one folder and the second join to the other. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or to create a complex folder. Either will work; in both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we then have to maintain all the data in another schema as well, which means updating two schemas in a given session - a separate schema for one user plus another schema for all the data - and there may be update problems.)
    OR
    2. Using single schema for all users.
    Note: All users may access the same tables, and there may be many more records than in the previous case.
    Which is the best approach?
    Please give your valuable ideas.

    That is all true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • Best Strategy to reduce the Database Size

    Hi Everyone,
    In our client's landscape, SAP systems have been upgraded to newer versions, and our client wants to retain one copy of each of the older production systems:
    1) SAP R/3 4.6C system (database size approx. 2TB)
    2) SAP BW 3.0 (database size approx. 2TB)
    Now the client wants us to reduce the database size via reorganization; archiving of IDOCs & links has already been done.
    The client has suggested:
    1) Oracle export/import: only an Oracle DBA can do this (ignore this one)
    2) Database reorganization: we have tried reorganization via BRtools but found it very tedious (ignore this one)
    3) SAP export/import: this is the way we want to reduce the database size
    Can anybody tell us how much free space we require at OS level in order to store the database export of the two databases (around 4TB in total), and what would be the best strategy for reducing the database size?
    Approximately how much will the database size be reduced via SAP export/import?
    Thanks & Regards
    Deepak Gosain

    Hi,
    >Can anybody tell us how much free space we require at OS level in order to store the database export
    >of the two databases (around 4TB in total), and what would be the best strategy for reducing the database size?
    The only realistic way to know is to do a system copy of the production system on a testbed system and to test the Database Export.
    If you really want to decrease the database size you will have to archive a lot more than the IDOC archiving object.
    Regards,
    Olivier

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know the best approaches to incorporate restart options in our process, i.e. handling a failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any builtin features/best approaches in OWB to implement the above?
    Does runtime audit tables help us to build re-start process?
    If not, do we need to maintain our own tables (custom) to maintain such data?
    How did our forum members handled above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
    On restart, will it run m1 again and then m2 and so on, or will it restart at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean: all mappings run in set-based mode, so in case of error all table updates are rolled back (there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping - you must carefully analyze the results of the error).
    What will happen if m3 fails? The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you reduce recycling of failed rows to zero (0). These settings guarantee that a mapping has only two possible results - SUCCESS or ERROR.
    What is the impact if we have a large volume of data? In my opinion, for large volumes set-based mode is the preferred processing mode.
    With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite.  I'll paste a snippet from the Error Log in at the bottom that shows the error - I've highlighted the description of the problem in red.
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like the one we're running. Here's the link to that thread: Re: Calendar crashes on open. For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff).  Any thoughts on "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of  about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this is to find all tag items and replace them with shared variable items?
    Thanks in advance with any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • What's the best approach to work with Excel, csv files

    Hi gurus, I've got a question for you. In your experience, what's the best approach to work with Excel or csv files that have to be uploaded through DataServices to your data warehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so that a set of reports can be generated from the data warehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front end to upload the Excel files directly to Data Services. The end user also needs to keep track of which person uploaded the files for a given month.
    1. The end user should place their 4 excel files in a shared directory that will be seen by DataServices.
    2. DataServices will execute certain scheduled job that will read the four files and upload them to the Datawarehouse at a determined time, lets say at 9:00pm.
    It makes me wonder... what happens if the user needs to present their reports immediately and can't wait until 9:00 pm? Is it possible for the end user to trigger some kind of action (outside the DataServices environment) so DataServices "knows" that it has to process those files right now, instead of waiting for the night schedule?
    Is there a way that DS will track who was the person who uploaded those files?
    Would it be better to build a front end for the end users so they can upload their four files directly to the data warehouse?
    Waiting for your comments to resolve this dilemma
    Best Regards
    Erika

    Hi,
    There are functions in DS that detect the input files automatically. You could use the file_exists() or wait_for_file() functions to do that. Schedule the job to run every few minutes and run the load only if the file exists. This could be done by using a specific file name with a date and timestamp, etc., or by moving the old files to an archive after each run so DS waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun

  • What is the Best Approach to Display the Top Most Accessed Links of a Portal

    HI Experts,
    I have a requirement to display the most accessed links of a portal in EP.
    From all the links in the portal, I need to display the top ten most accessed links by users in an iView.
    1) When a user clicks on a link, the click is stored in a trace file.
    2) The number of clicks in the portal is high volume: 25000+.
    3) I have created a Java program to segregate the data, but it uses buffered input/output operations in memory, which may cause memory problems in the production system.
    4) Can we use a BI method to handle the large amount of data?
    5) Is there any trace-file-extractor kind of tool to segregate the data?
    I need your suggestion on which approach I should take: BI or EP (portal components)?
    Please share your views on the above requirements.
    Thank you.
    Mahesh Narkuda

    Hi Mahesh,
    Using the Portal Activity Report (PAR) will allow you to collect the following data: visited pages and iviews and who visited those pages and iviews.
    the data will be collected and flushed to your Database.
    The PAR has its own out-of-the-box reports that you can use for your needs.
    If you decide to use the Activity Data Collector (ADC), the data will be collected and saved to text-based files on your operating system.
    The ADC doesn't have an out-of-the-box report system, but it is able to collect more types of data and it is more configurable.
    Hope this helps.
    Best Regards,
    Saar Dagan
    Edited by: Saar Dagan on Feb 27, 2012 2:16 PM

  • What is the best approach to process data on a row-by-row basis?

    Hi Gurus,
    I need to code a stored proc to process sales_orders into invoices. I think I must do a row-by-row operation, but if possible I don't want to use a cursor. The algorithm is below:
    for all sales_orders with status = "open"
        check the credit limit
        if over the credit limit -> insert a row into log_table; process the next order
        check for overdue invoices
        if there is an overdue invoice -> insert a row into log_table; process the next order
        check all order_items for stock availability
        if there is an item without enough stock -> insert a row into log_table; process the next order
        if all checks above are passed:
            create the invoice (header + details)
    end_for
    What is the best approach to process data on a row-by-row basis like the above?
    Thank you for your help,
    xtanto

    Processing data row by row is not the fastest method out there: you'll be sending many more SQL statements to the database than needed. The advice is to use plain SQL, and if that is not possible or too complex, use PL/SQL with bulk processing.
    In this case a SQL only solution is possible.
    The example below is oversimplified, but it shows the idea:
    SQL> create table sales_orders
      2  as
      3  select 1 no, 'O' status, 'Y' ind_over_credit_limit, 'N' ind_overdue, 'N' ind_stock_not_available from dual union all
      4  select 2, 'O', 'N', 'N', 'N' from dual union all
      5  select 3, 'O', 'N', 'Y', 'Y' from dual union all
      6  select 4, 'O', 'N', 'Y', 'N' from dual union all
      7  select 5, 'O', 'N', 'N', 'Y' from dual
      8  /
    Table created.
    SQL> create table log_table
      2  ( sales_order_no number
      3  , message        varchar2(100)
      4  )
      5  /
    Table created.
    SQL> create table invoices
      2  ( sales_order_no number
      3  )
      4  /
    Table created.
    SQL> select * from sales_orders
      2  /
            NO STATUS IND_OVER_CREDIT_LIMIT IND_OVERDUE IND_STOCK_NOT_AVAILABLE
             1 O      Y                     N           N
             2 O      N                     N           N
             3 O      N                     Y           Y
             4 O      N                     Y           N
             5 O      N                     N           Y
    5 rows selected.
    SQL> insert
      2    when ind_over_credit_limit = 'Y' then
      3         into log_table (sales_order_no,message) values (no,'Over credit limit')
      4    when ind_overdue = 'Y' and ind_over_credit_limit = 'N' then
      5         into log_table (sales_order_no,message) values (no,'Overdue')
      6    when ind_stock_not_available = 'Y' and ind_overdue = 'N' and ind_over_credit_limit = 'N' then
      7         into log_table (sales_order_no,message) values (no,'Stock not available')
      8    else
      9         into invoices (sales_order_no) values (no)
    10  select * from sales_orders where status = 'O'
    11  /
    5 rows created.
    SQL> select * from invoices
      2  /
    SALES_ORDER_NO
                 2
    1 row selected.
    SQL> select * from log_table
      2  /
    SALES_ORDER_NO MESSAGE
                 1 Over credit limit
                 3 Overdue
                 4 Overdue
                 5 Stock not available
    4 rows selected.
    Hope this helps.
    Regards,
    Rob.

  • What is the best approach to return large data from a stored procedure?

    no answers to my original post, maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data,
    3) returns this data to the caller.
    It has to be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested"
    I suspect this is taking too much time; any tips about the best approach here? is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed the process, be compatible with callers, etc ? all references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Best tool to reduce the network bandwidth for WEBI reports

    Hi Experts,
    We have a central BO XI server installed at Head Office. A few users need to connect to Head Office from their regional locations to access the WEBI reports.
    For this we have two options to refresh the report.
              1. BO InfoView
              2. Web Intelligence Rich Client
    My question is: which is the best way to reduce internet bandwidth usage? Since my users are in remote locations and don't have a good link, which option consumes the least bandwidth?
    Regards,
    Suresh

    If you're launching Webi Rich Client in 3-tier (ZABO) mode there should be no difference from an InfoView refresh, as WRC will utilize the same Webi report servers for the refresh as InfoView does.
    If you use 2-tier mode with WRC, then the refresh will be local, using a local connection to the reporting DB.
    I'd say InfoView will be less bandwidth intensive, as you will be sending only structured and prepared data to the client browser, not the raw data the report needs.
    To further decrease bandwidth, you can design your server infrastructure to have regional processing; that way most communications will be local and requests go to the central location only for the main data...
