In general, how is T-series performance for data warehousing databases (~1 TB)?

Hi,
I am planning to use a Sun T3-2 for a data warehouse application: Oracle 11g with around 1 TB of data, batch jobs, and ~200 concurrent users.
We are currently using an HP-UX Superdome server (a node partition with 8 physical 750 MHz CPUs).
I know that T-series servers are good for highly multithreaded applications. Can I consider Oracle Database 11g a highly multithreaded application?
What about the batch jobs that run long SQL statements? I don't think those can be considered multithreaded.
Is anyone using these T-series servers for databases around 1 TB in size?
Please suggest.
Thanks ..

The T3-2 can do much more total work than the old Superdome, but it won't run single-threaded queries significantly faster than the old Superdome; it will just be able to run about 100x more of them concurrently.
If you care most about price/performance, look at the X4800 or X4450 servers.
If you care most about availability and performance, look at the M4000/M5000 servers.
My experience with CoolThreads servers for Oracle databases: be 100% sure that what you're doing isn't single-thread dependent before deploying.
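To make that point concrete, here is a minimal sketch in plain Java (all names are invented for illustration; nothing here is Oracle-specific) showing that the same CPU-bound work finishes much sooner when split across threads, while each individual task runs no faster:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Times the same batch of CPU-bound tasks on 1 thread vs. all cores.
    // On a many-core, low-clock machine the second run scales well, but
    // each individual task takes just as long as before.
    public class ThreadScalingDemo {
        static long burnCpu(long iterations) {
            long acc = 0;
            for (long i = 0; i < iterations; i++) acc += i ^ (acc << 1);
            return acc;
        }

        static long timeRunMs(int poolSize, int tasks) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(poolSize);
            List<Future<Long>> futures = new ArrayList<>();
            long start = System.nanoTime();
            for (int i = 0; i < tasks; i++) {
                futures.add(pool.submit(() -> burnCpu(50_000_000L)));
            }
            for (Future<Long> f : futures) f.get(); // wait for all tasks
            pool.shutdown();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            System.out.println("1 thread : " + timeRunMs(1, cores) + " ms");
            System.out.println(cores + " threads: " + timeRunMs(cores, cores) + " ms");
        }
    }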

Similar Messages

  • Can anyone please tell me how to write a LabVIEW program for data logging in an electric motor bike?

    Can anyone please tell me how to write a LabVIEW program for data logging in an electric motor bike? I am going to use CompactRIO to gather a wide range of data from various sensors on the bike. I need to write a LabVIEW program to log the bike's temperature, voltage, and speed. Can anyone help me?

    Yes, we can.   
    I think the best place for you to start is the NI Developer Zone. I recommend beginning with these tutorials, which I found by searching on "data log rio". There were more than just these few that might be relevant to your project, but I'll leave that for you to decide.
    NI Compact RIO Setup and Services ->  http://zone.ni.com/devzone/cda/tut/p/id/11394
    Getting Started with CompactRIO - Logging Data to Disk  ->  http://zone.ni.com/devzone/cda/tut/p/id/11198
    Getting Started with CompactRIO - Performing Basic Control ->  http://zone.ni.com/devzone/cda/tut/p/id/11197
    These will probably give you links to more topics/tutorials/examples that can help you design and implement your target system.
    Jason
    Wire Warrior
    Behold the power of LabVIEW as my army of Roomba minions streaks across the floor!

  • Hello! Please give some tips on how to use BAPIs for data uploading

    Hello!
    Please give some tips on how to use BAPIs for data uploading.
    Regards,
    Arjun

    Hi,
    See the report extract below, where it_data holds the uploaded data.
    LOOP AT it_data INTO wa_data.
        line_count = sy-tabix.
        "Date Validation
        CONCATENATE wa_data-uplft_date+4(4) wa_data-uplft_date+2(2) wa_data-uplft_date+0(2)
        INTO wa_data-uplft_date.
        "READ TABLE it_ekko INTO wa_ekko WITH KEY lifnr = wa_data-vendor.
        LOOP AT it_ekko_temp INTO wa_ekko_temp WHERE lifnr = wa_data-vendor.
          IF wa_ekko_temp-kdatb <= wa_data-uplft_date AND wa_ekko_temp-kdate >= wa_data-uplft_date.
            MOVE-CORRESPONDING wa_ekko_temp TO wa_ekko.
            APPEND wa_ekko TO it_ekko.
          ENDIF.
        ENDLOOP.
        "IF sy-subrc = 0 AND wa_ekko-kdatb <= wa_data-uplft_date AND wa_ekko-kdate >= wa_data-uplft_date.
        LOOP AT it_ekko INTO wa_ekko.
          wa_data_header-pstng_date = wa_data-uplft_date.
          wa_data_header-doc_date = sy-datum.
          wa_data_header-bill_of_lading = wa_data-bill_of_lad.
          wa_data_header-ref_doc_no = wa_data-del_no.
          CONCATENATE wa_data-header_text1 '-'
                      wa_data-header_text2 '-'
                      wa_data-header_text3 '-'
                      wa_data-header_text4
                      into wa_data_header-HEADER_TXT.
          IF wa_data-indicator = 'Y'.
            wa_data_item-material = '000000000000200568'.
          ELSE.
            wa_data_item-material = '000000000000200566'.
          ENDIF.
          LOOP AT it_ekpo INTO wa_ekpo WHERE ebeln = wa_ekko-ebeln AND matnr = wa_data_item-material.
            "Collect Item Level Data
            wa_data_item-plant = '1000'.
            wa_data_item-stge_loc = '1001'.
            wa_data_item-move_type = '101'.
            wa_data_item-vendor = wa_data-vendor.
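            "Convert quantity to kilolitres (assumption: the source value appears to be litres, since the entry UoM below is KL)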
            wa_data-qnty = wa_data-qnty / 1000.
            wa_data_item-entry_qnt = wa_data-qnty.
            wa_data_item-po_pr_qnt = wa_data-qnty.
            wa_data_item-entry_uom = 'KL'.
            wa_data_item-entry_uom_iso = 'KL'.
            wa_data_item-orderpr_un = 'KL'.
            wa_data_item-orderpr_un_iso = 'KL'.
            wa_data_item-no_more_gr = 'X'.
            wa_data_item-po_number = wa_ekpo-ebeln.
            wa_data_item-po_item = wa_ekpo-ebelp.
            wa_data_item-unload_pt = wa_data-unload_pt.
            wa_data_item-mvt_ind = 'B'.
            APPEND wa_data_item TO it_data_item.
            CLEAR wa_data_item.
          ENDLOOP.
          CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
            EXPORTING
              goodsmvt_header = wa_data_header
              goodsmvt_code   = goodsmvt_code
              testrun         = 'X'
            TABLES
              goodsmvt_item   = it_data_item
              return          = return.
          READ TABLE return INTO wa_return WITH KEY type = 'S'.
          IF sy-subrc <> 0.
            DESCRIBE TABLE return LINES sy-tfill.
            IF sy-tfill = 0.
          CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
            EXPORTING
              goodsmvt_header = wa_data_header
              goodsmvt_code   = goodsmvt_code
              testrun         = ' '
            TABLES
              goodsmvt_item   = it_data_item
              return          = return.
          CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
            EXPORTING
              wait = 'X'.
        ENDIF.
          ENDIF.
          LOOP AT return INTO wa_return.
            WRITE: 'Message TYPE  ', wa_return-type,
                  /,'ID  ', wa_return-id,
                  /, 'Number  ', wa_return-number,
                  /, 'Message  ', wa_return-message,
                  /, 'Long Text  ', wa_return-message_v1,
                                    wa_return-message_v2,
                                    wa_return-message_v3,
                                    wa_return-message_v4,
                 /, 'Failed at line', line_count.
          ENDLOOP.
          CLEAR: wa_ekko, wa_ekpo, wa_data, it_data_item[], wa_data_header.
        ENDLOOP.
    Reward if useful!

  • How the transactional replication work for simple recovery database (looking for some internal concept)

    How the transactional replication work for simple recovery database (looking for some internal concept)
    Rahul

    There seems to be a new myth going around recently. I’ve had at least three people tell me, in the last month, that SQL’s transactional replication requires the database to be in full recovery.
    This is complete fabrication. Replication (SQL native replication, that is) can work with the databases in any of the recovery models.
    Transactional replication does involve the transaction log, as that's where it picks up changes from. The log reader scans the transaction log looking for log records marked for replication, copies those to the distribution database, and then marks them as replicated. When a checkpoint (for simple recovery) or log backup (for full or bulk-logged recovery) occurs, the log will only be truncated up to the oldest inactive, replicated transaction.
    Because transactional replication has its own way of ensuring log records aren't discarded before being picked up by the log reader, there's no requirement for a specific recovery model.
    Refer this link
    http://sqlinthewild.co.za/index.php/2008/12/05/a-new-sql-myth/
    --Prashanth

  • Scope for data warehousing ETL Tool

    Hi all
    Can anybody explain the scope of data warehousing ETL tools? For an Oracle developer, is this a good direction for the future, or...
    regards
    Message was edited by:
    174313

    What exactly is your question?
    The scope of using an ETL tool would be setting up and maintaining data warehouses and building ETL processes to populate them.
    A tool is generally preferred over hand coding because tools allow better maintenance, shorter development cycles, etc.
    Oracle has a pretty good ETL tool called Oracle Warehouse Builder. It's not the best tool available, but if you compare price to functionality, I would say in most cases it will do.
    If your question is whether it's a wise pick to master OWB or any ETL tool, my answer would be a clear YES! Data warehouses and BI are becoming more important every day, and their use gets broader every day.
    If your question is whether an investment in OWB is a wise investment for the future, I would answer a clear YES again. It's incredible to see what progress Oracle has made with the tool, coming from a 'laughing stock' position, regarded as a completely immature, good-for-nothing tool, to where they are now with 10.2: regarded as one of the leaders by Gartner.
    Oracle recognized a long while ago that ETL is the bread and butter of the future and invests in a good-quality tool for accomplishing it.
    I hope this answers your question; if not, please try to specify it more clearly.
    Regards,
    Toin.

  • How to find the structure fields data in database tables?

    how to find the structure fields data in database tables?

    Your question doesn't appear to be Web Dynpro ABAP related. Please post questions in this forum only if they are directly related to Web Dynpro ABAP. There are several other, more general ABAP forums.

  • Best  Course  for Data Warehousing

    Hi,
    I am planning to join a data warehousing course. I heard there are lots of courses in data warehousing:
    Data warehousing with ETL tools or
    Data warehousing with Crystal Reports or
    Data warehousing with Business object or
    Data warehousing with Informatica or
    Data warehousing with Bo-Webel or
    Data warehousing with Cognos or
    Data warehousing with Data Stage or
    Data warehousing with MSTR or
    Data warehousing with Erwin or
    Data warehousing with oracle.
    Please suggest which is best to choose and which has more scope, because I don't know the ABC of data warehousing, though I have some experience with Oracle.
    Must I have work experience in data warehousing to get a job? Please also tell me the best book on data warehousing for starting from scratch. Please give your suggestions on my queries.
    Thanks & Regards,
    Raji

    Hi,
    Basically, a data warehouse is a concept. To develop a DW you mainly need two tools: an ETL tool and a reporting tool.
    A few well-known ETL tools are:
    Informatica
    Data Stage
    A few well-known reporting tools are:
    Crystal Reports
    Cognos
    Business Objects
    As a DW developer you should know at least one ETL tool and at least one reporting tool. The combination is your choice; it is best to find out which combination the job market favours and then learn those.
    Erwin is a data modeling tool; it can also be used in a DW implementation. You already have experience with Oracle, so my advice is to go for data warehousing with Oracle or with Informatica, and to learn one reporting tool as well. I do not know whether Oracle offers a reporting tool.
    My suggestions on books:
    Data Warehousing Fundamentals by Paulraj Ponniah, and
    The Data Warehouse Toolkit.
    http://www.inmoncif.com/about.html is one of the best sites for data warehousing.
    With rgds,
    Anil Kumar Sharma .P
    Assigning points is the way to say thanks in SDN site.

  • Webcast : Sun Oracle Database Machine for Data Warehousing  -Sep 30 noon ET

    Sun Oracle Database Machine for Data Warehousing
    Jean Pierre Dijcks - Data Warehousing Product Mgmt, Oracle
    https://conference.oracle.com/imtapp/app/cmn_jm_hub.uix?mID=158101510
    On September 15 Oracle announced the second generation of its Database Machine, making an already strong data warehousing product significantly stronger. The new version runs on Sun hardware and offers important new features. Available in full rack, half rack, quarter rack, and basic unit configurations, the Sun Oracle Database Machine can add value at many data warehouse size levels.
    The Sun Oracle Database Machine runs on Oracle Database 11g Release 2 and has new features such as:
    Smart Flash Cache memory for ultra-fast IO - Reaches 50GB/second on a full rack system (not even counting gains from compression)
    Exadata Hybrid Columnar Compression - Maximizes data capacity and reduces scan times: think 500GB/second IO
    Offloaded Data Mining Scoring - Moves CPU-intensive operations from database servers to Exadata storage servers
    In-Memory Parallel Execution - Caches full tables in memory across nodes: foundation of new TPC-H world record
    There is plenty more we have not listed above, so come to this TechCast and learn about this major new product!
    Audio Dial-In: 888 967 2253 Audio Meeting ID: 572994 Audio Meeting Passcode: 334451
    Web Conference: https://conference.oracle.com/imtapp/app/cmn_jm_hub.uix?mID=158101510
    Compatibility Check: If you have not used Oracle's web conference system before, please ensure your system
    compatibility by going to https://conference.oracle.com/imtapp/app/nuf_sys.uix

    Is there any way to get a recording of this webcast to watch offline?
    regards

  • How to improve the performance for integrating third party search engine

    hi,
    I have been working on integrating the Verity search engine with KM. The time to retrieve search results depends entirely on how many results are returned; for example, with fewer than 10 records it takes only 3 seconds, but with 200 records it takes about 3 minutes. Is that normal? Is there any way to improve it? Thanks!
    T.J.

    Thilo,
    thanks for the response. Could you recommend some documentation for configuring the KM cache service? I changed the memory cache and also the dynamic web repository; what else is out there that I can change? Right now I have one instance (EP 6.4 SP11) that works well: it returns 200 records from Stellent within 6 s. But when I put this KM global service on EP 6.0 SP2 (our current system) it takes about 15 s. I am not sure whether this is because of the different EP version or something else. I have done my best to slim down the SOAP component on the Stellent side, and I don't think anything more can be done there; before I changed the SOAP component, it took about 60 s. I just wonder what else I can do on the KM side to improve performance. Thanks!
    T.J.

  • How to mesure/benchmark performance of a new database on new server?

    Hi there
    I have two oracle servers with following (same) details:
    RHEL 5.8 64-bit
    Oracle 10gR2 - 10.2.0.5.8
    ASM 10gR2 - 10.2.0.5.8
    Server A: RAM 32GB, 8 CPUs @ 3.00GHz
    Server B: RAM 128GB, vCPUs 16 cores
    Server A (the physical server) already hosts database A. Server B (on VMware; yes, my client is moving all Oracle servers to VMware for whatever reason) is a new installation with a new database B, using exactly the same init parameters as database A. I used expdp to export the data from database A and impdp to load it into database B.
    According to the hardware team, the new server's hardware is better than the old one's. I ran a very basic test to check whether the new DB performs better than the one on the physical server. Here are the results:
    I ran a simple query to create a new table. The original table (say, table_a) contains 1.7+ million rows and its size is 2.2 GB.
    create table test1
    as
    select * from table_a;
    It took 3:28 min on database B, while it took only 1:55 min on database A, so the new database B seems to perform worse (apparently). Then I looked at the explain plan (not sure it means much, because it is a very simple query); here it is from both databases:
    Database A (physical server)
    Plan
    SELECT STATEMENT ALL_ROWS
    Cost: 14,052  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    1 TABLE ACCESS FULL TABLE table_a
    Cost: 14,052  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    Database B (virtual server)
    Plan
    SELECT STATEMENT ALL_ROWS
    Cost: 59,844  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    1 TABLE ACCESS FULL TABLE table_a
    Cost: 59,844  Bytes: 2,161,302,003  Cardinality: 16,250,391 
    Questions:
    1. Why is the cost different? Should I gather statistics ("compute statistics") on database B (the virtual server)?
    2. How can I investigate further and find the reason for the time difference?
    3. What other benchmark tests can I run to make sure that I have the right database configuration?
    Not sure if this is enough info; if not, please let me know what else I should provide.
    The team I have to hand this server over to is refusing to accept it, saying that it is slower than the existing one.
    Please advise!
    Best regards

    Wow... I am really thankful for everyone's input - this is really really appreciated!
    I will try what you all have suggested. In the meantime, I did some simple test on both databases and here are the results:
    All times are in hh:mm:ss.00.
    Operation                        Database A (physical server)            Database B (VM)
                                     1st run      2nd run      3rd run       1st run      2nd run      3rd run
    Create table t1 (1.7M rows)      00:01:55.78  00:01:56.27  00:01:56.71   00:00:25.83  00:00:24.67  00:00:44.06
    Create index (2 cols) on t1      00:02:12.59  00:02:11.54  00:02:12.36   00:03:54.60  00:03:05.81  00:03:12.91
    Create table t2 (500,000 rows)   00:00:03.06  00:00:02.89  00:00:03.14   00:00:00.67  00:00:00.62  00:00:00.97
    Create index (2 cols) on t2      00:00:01.99  00:00:01.09  00:00:01.13   00:00:01.43  00:00:01.10  00:00:01.62
    Delete from t1 (500,000 rows)    00:01:25.56  00:01:18.39  00:01:22.97   00:00:29.56  00:00:31.76  00:00:39.35
    Insert into t1 (500,000 rows)    00:00:10.37  00:00:10.20  00:00:10.22   00:00:09.75  00:00:08.59  00:00:08.90
    Drop table t2                    00:00:00.15  00:00:00.17  00:00:00.15   00:00:00.05  00:00:00.04  00:00:00.03
    Drop table t1                    00:00:05.12  00:00:04.87  00:00:04.88   00:00:01.10  00:00:00.59  00:00:00.61
    Now the database on Server B (VMware) seems to be outperforming the one on Server A for everything except the "create index on two columns on t1" operation.
    Any clues why index creation consistently takes longer on database B (on the VM) than on database A (the physical server)?
    @jgarry: I am not in a position to try SLOB (no doubt a good tool with a solid reputation) because it requires creating a new database, which I cannot do on the existing server. I did try HammerDB, but unfortunately it crashed on every attempt to run the load test.
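    If it would help to make the timings repeatable and client-independent, here is a minimal sketch (JDBC; the URL, credentials, and statements are placeholders, not taken from this thread) of timing each statement the same way on both servers:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Sketch only: runs each statement once and prints elapsed wall-clock time.
        // Swap in the real host, credentials, and the statements from the table above.
        public class StatementTimer {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
                     Statement stmt = conn.createStatement()) {
                    time(stmt, "CREATE TABLE test1 AS SELECT * FROM table_a");
                    time(stmt, "DROP TABLE test1");
                }
            }

            static void time(Statement stmt, String sql) throws Exception {
                long start = System.nanoTime();
                stmt.execute(sql);
                System.out.println(((System.nanoTime() - start) / 1_000_000) + " ms  " + sql);
            }
        }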

  • How to view custom performance counter data?

    I have created a new MVC application and added Application Insights to the project. I modified the ApplicationInsights.config file to start collecting the performance counter for Memory\Page Faults/sec. How can I tell whether this data is making it to Application Insights? I can't see the data in the portal; when I add a chart in Metrics Explorer, this counter does not exist under Performance Counters.
    Where do I go to view this data? How can I determine whether it's working? Also, the documentation I'm finding on the subject appears to be outdated. Is MMA still used to capture this data? If not, what is used now? Do I need to restart something in order for this data to start getting collected?
    Here is my config:
    <?xml version="1.0" encoding="utf-8"?>
    <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">  
      <TelemetryModules>
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Implementation.Tracing.DiagnosticsTelemetryModule, Microsoft.ApplicationInsights" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.RuntimeTelemetry.RemoteDependencyModule, Microsoft.ApplicationInsights.Extensibility.RuntimeTelemetry" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCollector.PerformanceCollectorModule, Microsoft.ApplicationInsights.Extensibility.PerfCollector" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.WebApplicationLifecycleModule, Microsoft.ApplicationInsights.Extensibility.Web" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.RequestTracking.TelemetryModules.WebRequestTrackingTelemetryModule, Microsoft.ApplicationInsights.Extensibility.Web" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.RequestTracking.TelemetryModules.WebExceptionTrackingTelemetryModule, Microsoft.ApplicationInsights.Extensibility.Web" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.RequestTracking.TelemetryModules.WebSessionTrackingTelemetryModule, Microsoft.ApplicationInsights.Extensibility.Web" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.RequestTracking.TelemetryModules.WebUserTrackingTelemetryModule, Microsoft.ApplicationInsights.Extensibility.Web" />
      </TelemetryModules>
      <ContextInitializers>
        <Add Type="Microsoft.ApplicationInsights.Extensibility.ComponentContextInitializer, Microsoft.ApplicationInsights" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.DeviceContextInitializer, Microsoft.ApplicationInsights" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.AzureRoleEnvironmentContextInitializer, Microsoft.ApplicationInsights.Extensibility.Web" />
      </ContextInitializers>
      <TelemetryInitializers>
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.TelemetryInitializers.WebOperationNameTelemetryInitializer, Microsoft.ApplicationInsights.Extensibility.Web" />
        <Add Type="Microsoft.ApplicationInsights.Extensibility.Web.TelemetryInitializers.WebOperationIdTelemetryInitializer, Microsoft.ApplicationInsights.Extensibility.Web" />
      </TelemetryInitializers>
      <InstrumentationKey>*snip*</InstrumentationKey>
      <PerformanceCounters>
        <PerformanceCounterConfiguration counterSpecifier="\Memory\Page Faults/sec"/>
      </PerformanceCounters>
    </ApplicationInsights>

    Can you please let us know which documentation you're referring to? It does seem to be outdated.
    Application Insights collects certain performance counters on its own; unfortunately, the list of performance counters is not configurable as of now. The syntax you're using
    <PerformanceCounters>
        <PerformanceCounterConfiguration counterSpecifier="\Memory\Page Faults/sec"/>
      </PerformanceCounters>
    is not supported.
    Performance data will be collected automatically (no further configuration needed) as long as the following element is in your ApplicationInsights.config (and it is indeed present in your sample):
      <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCollector.PerformanceCollectorModule, Microsoft.ApplicationInsights.Extensibility.PerfCollector" />
    Performance counters currently collected by Application Insights are:
    \Process(<application process>)\% Processor Time
    \Memory\Available MBytes
    \ASP.NET Applications(<IIS process>)\Requests/Sec
    \.NET CLR Exceptions(<application process>)\# of Exceps Thrown / sec
    \ASP.NET Applications(<IIS process>)\Request Execution Time
    \Process(<application process>)\Private Bytes
    \Process(<application process>)\IO Data Bytes/sec
    \ASP.NET Applications(<IIS process>)\Requests In Application Queue
    \Processor(_Total)\% Processor Time
    There are additional factors that may affect performance collection; this blog article contains a section detailing performance collection in the Application Insights SDK:
    http://blogs.msdn.com/b/visualstudioalm/archive/2014/12/11/updated-application-insights-status-monitor-to-support-12-and-later-application-insights-sdk.aspx
    Please check out the section starting with the words "One of the changes made in the .12 version of the Application Insights for Web Applications SDK is the collection of the following Windows performance counters."
    As you can see from the blog post, you are indeed looking in the right place in the portal (Metrics Explorer, under Performance Counters), but only the default counters are collected.
    As a workaround, consider using one of the TelemetryClient.Track* methods to report the data to Application Insights yourself.
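    (This thread concerns the .NET SDK, but for illustration, the same workaround using the Application Insights Java SDK's TelemetryClient looks roughly like the sketch below; verify the API against your SDK version, and note that the counter value itself is a placeholder you must obtain from your platform.)

        import com.microsoft.applicationinsights.TelemetryClient;

        // Sketch only: reports a custom metric value to Application Insights.
        public class PageFaultReporter {
            private final TelemetryClient client = new TelemetryClient();

            public void report(double pageFaultsPerSec) {
                client.trackMetric("Memory Page Faults/sec", pageFaultsPerSec);
                client.flush(); // send now instead of waiting for the next batch
            }
        }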

  • Optimal read write performance for data with duplicate keys

    Hi,
    I am constructing a database that will store data with duplicate keys.
    For each key (a String) there will be multiple data objects, there is no upper limit to the number of data objects, but let's say there could be a million.
    Data objects have a time-stamp (Long) field and a message (String) field.
    At the moment I write these data objects into the database in chronological order, as I receive them, for any given key.
    When I retrieve data for a key, and iterate across the duplicates for any given primary key using a cursor they are fetched in ascending chronological order.
    What I would like to do is start fetching these records in reverse order, say just the last 10 records that were written to the database for a given key, and was wondering if anyone had some suggestions on the optimal way to do this.
    I have considered writing data out in the order I want to retrieve it, by supplying the database with a custom duplicate comparator. If I did this, the query above would return the latest data first, and I would be able to iterate over the most recent inserts quickly. But is there a performance penalty on database writes if I do this?
    I have also considered using the time-stamp field as the unique primary key of the primary database instead of the String, and creating a secondary database for the String. This would let me index into the data using a cursor join, but I'm not certain it would perform any better, at least not on writes, since it would result in a very flat B-tree.
    Is there a fundamental choice that I will have to make between write versus read performance? Any suggestions on tackling this much appreciated.
    Many Thanks,
    Joel

    Hi Joel,
    Using a duplicate comparator will slow down Btree access (writes and reads) to some degree, because the comparator is called a lot during searching. Whether this is a problem depends on whether your app is CPU-bound and how much CPU time your comparator uses. If you can avoid de-serializing the object in the comparator, that will help. For example, if you keep the timestamp at the beginning of the data and read only that one long timestamp field in your comparator, it should be pretty fast.
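    A minimal sketch of such a comparator (assuming the 8-byte timestamp is stored big-endian at the start of each record's data; adjust to your actual serialization), registered via DatabaseConfig.setDuplicateComparator:

        import java.util.Comparator;

        // Orders duplicates newest-first by decoding only the leading 8-byte
        // big-endian timestamp, avoiding full de-serialization of the record.
        public class ReverseTimestampComparator implements Comparator<byte[]> {
            public int compare(byte[] a, byte[] b) {
                return Long.compare(readLong(b), readLong(a)); // descending
            }

            private static long readLong(byte[] buf) {
                long v = 0;
                for (int i = 0; i < 8; i++) {
                    v = (v << 8) | (buf[i] & 0xFF);
                }
                return v;
            }
        }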
    Another approach is to store the negation of the timestamp, so that records are sorted naturally in reverse timestamp order.
    Another approach is to read backwards using a cursor. This takes a couple of steps:
    1) Find the last duplicate for the primary key you're interested in:
      cursor.getSearchKey(keyOfInterest, ...)
      status = cursor.getNextNoDup(...)
      if (status == SUCCESS) {
          // Found the next primary key, now back up one record.
          status = cursor.getPrev(...)
      } else {
          // This is the last primary key, find the last record.
          status = cursor.getLast(...)
      }
    2) Scan backwards over the duplicates:
      while (status == SUCCESS) {
          // Process one record.
          // Move backwards.
          status = cursor.getPrev(...)
      }
    Finally, another approach is to use a two-part primary key: {string, timestamp}. Duplicates are not configured because every key is unique. I mention this because using duplicates in JE has more overhead than using a unique primary key. You can combine this with either of the above approaches: using a comparator, negating the timestamp, or scanning backwards.
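    A hedged sketch of composing such a two-part key (the NUL separator and big-endian layout are assumptions; JE just compares the key bytes):

        import java.nio.ByteBuffer;
        import java.nio.charset.StandardCharsets;

        // Builds a {string, timestamp} key that sorts by string first, then time,
        // under the default byte-wise key comparison. Assumes the string itself
        // contains no NUL byte (used here as the field separator).
        public class KeyComposer {
            static byte[] makeKey(String name, long timestamp) {
                byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
                ByteBuffer buf = ByteBuffer.allocate(nameBytes.length + 1 + 8);
                buf.put(nameBytes).put((byte) 0);
                // Flip the sign bit so signed longs sort correctly as unsigned
                // bytes; negate the timestamp first for newest-first ordering.
                buf.putLong(timestamp ^ Long.MIN_VALUE);
                return buf.array();
            }
        }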
    --mark

  • How to set up VLAN for DATA and VOIP on SRW248G4P switch?

    Hi guys,
    I am totally new and was given this task to complete. I really need help.
    We are using one network, 192.168.1.0, shared between data and VoIP.
    A Cisco C870 router and 5 Linksys SRW248G4P switches.
    The email said:
    On the Linksys switch:
    - create two different VLANs, one for voice and one for data.
    - put a firewall between the two VLANs (between voice and data) and only allow certain ports through to the voice network (inbound TCP 8443 and SSH).
    What should I do, guys? I really need a step-by-step guide.
    I know it's simple for you guys, but I am not a smart IT fella. What are the steps?

    If the switch is new or you have support on it, you might try calling the support center. Here is a link:
    https://www.myciscocommunity.com/community/smallbizsupport
    On the right-hand side you can find links to the support center.
    Here is a link to the guide:
    http://www.cisco.com/en/US/products/ps9967/prod_maintenance_guides_list.html
    At the bottom of that page you can find your switch model; you want the larger of the two guides. This guide shows you how to create a second VLAN.
    Will your router be the firewall between the two?
    Kindest regards,
    Andrew Lissitz

  • How to use the API for DATE, MONTH  AND YEAR

    I would like to use the java.util.Calendar API to get the current date.
    How do I use the API to get the DATE, MONTH, and YEAR fields that Java provides?
    Can someone give me one complete code example?

    From the Java Developers Almanac 1.4:
        Calendar cal = new GregorianCalendar();
        // Get the components of the date
        int era = cal.get(Calendar.ERA);               // 0=BC, 1=AD
        int year = cal.get(Calendar.YEAR);             // 2002
        int month = cal.get(Calendar.MONTH);           // 0=Jan, 1=Feb, ...
        int day = cal.get(Calendar.DAY_OF_MONTH);      // 1...
        int dayOfWeek = cal.get(Calendar.DAY_OF_WEEK); // 1=Sunday, 2=Monday, ...
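    Since the question asked for one complete piece of code, here is the same snippet wrapped into a runnable class (a minimal sketch; the class and variable names are our own):

        import java.util.Calendar;
        import java.util.GregorianCalendar;

        public class DateParts {
            public static void main(String[] args) {
                Calendar cal = new GregorianCalendar();
                int era   = cal.get(Calendar.ERA);          // 0=BC, 1=AD
                int year  = cal.get(Calendar.YEAR);
                int month = cal.get(Calendar.MONTH) + 1;    // Calendar.MONTH is 0-based
                int day   = cal.get(Calendar.DAY_OF_MONTH);
                System.out.printf("%04d-%02d-%02d (era %d)%n", year, month, day, era);
            }
        }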

  • Performance for full export database.

    Hi,
    Is it possible to estimate or measure how much time is needed to export the whole database?
    Should I use the parameter direct=y to speed up the export process?
    Thanks very much.
    Frank

    Hi,
    I tried the export on the database server, but the error below was encountered.
    Is it because the user (system) is different from the owner (ds_ap_dwh)?
    Can anyone advise me? Thanks a lot.
    Frank.
    C:\>exp system/manager owner=ds_ap_dwh direct=y file=g:\oracle\export\ds_ap_dwh.dmp log=g:\oracle\export\ds_ap_dwh.log
    Export: Release 9.2.0.1.0 - Production on Tue May 29 23:48:36 2007
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to: Oracle9i Release 9.2.0.1.0 - Production
    JServer Release 9.2.0.1.0 - Production
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user DS_AP_DWH
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user DS_AP_DWH
    About to export DS_AP_DWH's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    EXP-00056: ORACLE error 24324 encountered
    ORA-24324: service handle not initialized
    EXP-00056: ORACLE error 24324 encountered
    ORA-24324: service handle not initialized
    EXP-00000: Export terminated unsuccessfully
