Import optimization

Hello everyone. I am a C++ programmer who knows enough Java to be dangerous. I have been assigned to port one of our existing applications to a Java/J2ME environment to be run on a cell phone. I am trying to find out the best techniques for writing efficient bytecode. I am already using an obfuscator to shrink my JAR file. The compression level is set to 9 (this is as far as it goes in NetBeans). I am wondering if more optimization can be achieved. If I use wildcards in my import statements, will my compiled class file be larger than if I were to use the specific class names in my import statements? Are there any commonly used techniques to achieve a higher level of optimization?

import just tells the compiler where to look for classes used in the source.

    import java.util.*;

    List a = new ArrayList();

and

    import java.util.List;
    import java.util.ArrayList;

    List a = new ArrayList();

both compile to

    java.util.List a = new java.util.ArrayList();

so wildcard and explicit imports produce exactly the same compiled class file.
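
As a quick illustration, here is the wildcard version as a complete class (a minimal sketch; the class name Demo is made up). Swapping the wildcard for explicit imports changes nothing in the output, because the compiler resolves every name at compile time and only fully qualified names end up in the .class file:

    // Wildcard-import version. Replacing the first line with
    // "import java.util.List;" and "import java.util.ArrayList;" yields the
    // same compiled output: the class file stores only the fully qualified
    // names java.util.List and java.util.ArrayList.
    import java.util.*;

    public class Demo {
        public static void main(String[] args) {
            List a = new ArrayList();   // resolved at compile time
            a.add("hello");
            System.out.println(a);
        }
    }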

Similar Messages

  • Import/optimize question

    I'm importing videos shot with my digital camera, which takes great HD movies. I notice when I import them into iMovie 11, some of them come out grainier than the original. I'm selecting the Optimize video checkbox when importing. Is this what's causing the problem? I don't quite understand what the Optimize function is all about. I see some iMovies which others have made that are just crystal clear, but a few of my clips, especially ones which are a bit darker, come out in a lower quality once they've been imported. Not sure of the proper way to do this. Any ideas/suggestions? Should I always be optimizing my video imports and if so, how do I get them to be as crystal clear as the originals? The movies my camera takes are 30fps Motion JPEG. They are 1280 x 720 HD and the QT info panel says:
    Photo - JPEG, 1280 x 720, Millions
    Linear PCM, 16 bit big-endian signed integer, 2 channels, 16000 Hz

    Optimize usually indicates that iMovie is translating the videos into the Apple Intermediate Codec format so that iMovie can easily preview and edit the video accurately down to the 1/30th-of-a-second video frame. If you don't want iMovie to do that, you can import unoptimized and at Full Size as well, just to force iMovie to touch the video a little less compared to Large Size and Optimized. You might try a test with one batch of Events coming in (don't erase the clips off the camera), then re-import with the Full Size, unoptimized settings and see if you can tell a difference between the two. Full Size, unoptimized is going to take up a lot of hard drive space, so be aware there's no free lunch when choosing those settings (Full Size = large file sizes).

  • Training Invitation - SAP DB2 Migration Optimization Workshop -- SAP DB2 Migration Optimization (Free)

    Dear customer,
    To help SAP customers carry out heterogeneous system migrations more smoothly and to improve their ability to migrate and manage SAP systems in a DB2 database environment, we are writing on behalf of IBM to invite you to the "SAP DB2 Migration Optimization" training delivered by IBM.
    This free training will be held in Beijing (April 12-14). It is aimed at SAP system administrators, DBAs and technical consultants who already have some SAP administration experience and who want to gain a deeper understanding of DB2 LUW and of how to perform heterogeneous migrations efficiently in an SAP environment. A lab environment is provided for every participant. The training objectives are:
    • Understand DB2 fundamentals;
    • Learn to use SAP and DB2 tools to perform efficient and safe heterogeneous system migrations;
    • Learn about migration monitoring tools;
    • Learn about migration-related optimization tools and methods;
    • Understand migration optimization for SAP ERP and BW;
    • Acquire basic problem-analysis skills for the migration process.
    Please see the agenda below for the details of the training.
    If you decide to attend, please fill in the reply form below and confirm by email before April 5, 2011.
    Contact: Guo Yi Qun (郭亦群)
    Tel:  86-10 63614570         Email: guoyiq at cn.ibm.com
    Mobile: 86-13701235290
    Thank you for your cooperation!
    (Note: travel and accommodation at your own expense)
    SAP DB2 Migration Optimization training agenda
    Location: Room 605, Hesheng Jiaye Building / Beijing Yizhi Kangda Technology Co., Ltd.
         32 Zhongguancun Street, Haidian District, Beijing
    Dates: April 12-14
    (Class hours each day: 9:30-17:30)
    Day 1
    1.1    DB2 & SAP Overview
    1.2    Migration Overview
    1.3    Migration Tools Usage and Optimization
    1.4    Hands On Lab 1
    Day 2
    2.1    Advanced Optimization Techniques
    2.2    Hands On Lab 2
    2.3     Monitoring Tools
    2.4     DB2 Layout and Configuration Options
    2.5     Hands On Lab 3
    2.6     Import Optimization (part 1)
    2.7     Hands On Lab 4
    Day 3
    3.1   Import Optimization (part 2)
    3.2     Hands On Lab 5
    3.3     Import Optimization (part 3)
    3.4     Hands On Lab 6
    3.5     Special Considerations for Migrating BI systems
    3.6     SAP/DB2 Migration Optimization - Summary
    3.7     Q&A
    Reply form
    Company name: ___________________________________________________________
    Address: ___________________________________________________________
    Name:          Title:
    Email:          Phone:
    Name:          Title:
    Email:          Phone:

    Not bad, worth attending.

  • Trouble Importing H.264

    I do apologize in advance if this was asked elsewhere... I have yet to find the solution.
    I am very new to iMovie and Mac, so sorry if this sounds newbie - it's because it is.
    I am using a Canon HF20 and have some home video footage I am working with. I typically use Adobe Premiere and Encore for most of my video editing/rendering because of the ability to edit on the fly in the AVCHD format. However, for family vacations I enjoy using iMovie for the ease of creating a quick 3-4 min "highlight" clip. The way I accomplish this, as read everywhere else, is by converting my .mts clips to H.264 at 1080p resolution using Adobe Media Encoder. I did not have a problem with iMovie '09, but when I recently upgraded to iMovie '11 the clip does not import properly.
    With iMovie '09 it would take 2 hours to import/optimize my clip (usually 2-2.5 hours in length), but the entire movie is present. With iMovie '11 it takes 10 min to import and the movie is there in its entire length, with fully working audio - however, only the first frame of the movie is shown throughout the entire imported clip. I am unsure if this is a bug with the "iFrame" support included in iMovie '11 and the updated iMovie '09, but I am wondering if anyone else has experienced this and/or has a solution. Thanks for your help.
    Darth Gup

    Darth, I have nearly the same issues as you. I've lately been working in CS5 Premiere Pro with my Sony AVCHD footage. As I wanted to make some quick videos in iMovie 11 today (to upload to my MobileMe gallery to share), I exported (using Adobe Media Encoder) HDTV 1080 H.264 reference footage, thinking it would import into iMovie 11. Instead, the clip imported into iMovie plays audio but the video displays a still frame.
    I'm going to work a bit more with this, perhaps will have to export to Apple TV format (1280X720) and import that into iMovie11. But the whole point was to work with a high-res reference video, wasn't it. Sigh.

  • Can't import mts/HD video to imovie 09

    Hi,
    I have the Sony HDR-TG5V and have successfully imported mpg/SD videos to iMovie but haven't been able to import mts/HD videos. I've tried deleting the SD from my video camera but it still doesn't recognize the mts/HD videos. For some reason the SD videos keep popping up even though I've deleted them from the camera. Please help!

    Yes, the ffmpeg solution seems to work and I was very happy with this at first because it also eliminates re-encoding whole video clips again.
    BUT! I do have serious performance issues inside iMovie '09 with these newly created M4V files. Once imported, skimming through them goes extremely slowly. And I mean extreeemely slowww... So I'm unable to use them. When I choose the optimize setting and let iMovie convert the file to QuickTime after all, optimizing also takes a very long time. A 30-second clip took me about an hour to import/optimize, and I don't have this issue when importing from the camera. Once imported and optimized in QuickTime the clip works perfectly, just like any other clip, but it's going to take me a tremendous amount of time to do this with the large number of MTS files I have (incorrectly) saved on an external hard drive.
    There must be some sort of solution to wrap the MTS files in a native AVCHD folder structure so that iMovie will import them? And I mean a solution without re-encoding them to MPEG-4 or anything else, because that would cost me too much time as well.
    Any other tips? Maybe writing the files back to the camcorder but unfortunately Canon only supplies Windows software for doing that. Anyone had experience with that?

  • Why optimize makes H264 files

    Long-time legacy user, working my way through the 10.0.3 trial version... I'm confused about the import/optimize feature. The manual etc. say that when importing media (in this case Panasonic LX-5 AVCHD footage) it should be optimized to ProRes 422, yet when I "inspect" the imported footage in my timeline, it says H.264. Sorry to be a newb, but what am I doing wrong here?
    Thanks
    John

    Nothing. The inspector shows the format of the original media. Not sure why, but there it is. Use Reveal in Finder to check that you have the optimized media in the timeline.

  • Setting Up a Java Web Server on Linux

    Greetings,
    After a lot of reading, I've decided to learn Java server-side technologies (I already know Java (desktop)
    but with little experience) in order to develop my database driven dynamic website projects.
    However, I'm completely lost among the hundreds of acronyms related to Java and the "thousands" of technologies involved.
    Google hasn't helped me understand what they really are and how they relate to each other.
    Even though it is easy to find what they stand for and some FAQs/descriptions, it hasn't been enough to clear my mind.
    I want my webserver application to do, among other things, the following:
    Retrieve data from a database to generate dynamic HTML files.
    Store data in the database, from HTML forms.
    Automatically send emails to multiple users (a newsletter, for example).
    E-commerce.
    An HTML grabber/parser (I don't know if this is the right word; I mean a program that, for example, goes to a URL with an HTML
    table and stores that information in a database table, as long as the format didn't change).
    Real-time chat feature.
    Database connection pooling, caching, and other important optimizations.
    I'm not asking how to develop or configure these features, nor what components and programs must be installed.
    At the moment, I only want to know how to set up a Java solution that will support them when they are needed.
    *** Linux ***
    I want to set up my webserver on a linux distribution.
    In that respect, I don't know if I should choose RedHat or Fedora Core.
    I've heard that fedora is better for a webserver (having in mind that Red Hat Enterprise AS/ES aren't free).
    But I've read references to RedHat (in a J2EE context) in several Sun tutorials/webpages.
    So I'm wondering whether it is better to go for Fedora or RedHat. Are there pros & cons? Do they equally support Java?
    *** Technologies ***
    Then the acronym problem starts.
    What do I need to install, in what order, and for what?
    Some acronyms and technologies I have read about but don't fully understand are the following:
    Apache
    J2EE
    J2EE 1.4 SDK
    SUN Java System Application Server Platform Edition 8
    J2SE 1.4.2 SDK
    SUN Java Enterprise System
    Tomcat
    Struts
    Cocoon
    JBoss
    I already know J2EE is only a specification. What implements that specification? Sun Java System Application Server?
    J2SE SDK? or Tomcat?
    What is the role of each technology, namely Apache, Tomcat, Sun Application Server?
    What is the Sun Java Enterprise System?
    How do Struts, Cocoon, JBoss relate to Java?
    Which of these technologies are mutually exclusive (analogous)?
    *** Doubts ***
    Then, I have some doubts that are keeping me from starting to seriously study the important technologies, because I don't
    want to lose lots of time and effort learning them only to become disillusioned and start everything again with PHP (the language
    which made me think and read a lot before going for Java). To keep this from happening, I would like to know the following:
    I want to use Java for developing websites with commercial/profitable use for my company, in some cases through e-commerce,
    and in others through banners. I want to do everything by the book, with the required licenses.
    ------ Java is Free?
    Is Java completely free, or might it be possible that at a certain point, when I need some component or library,
    a multi-machine webserver, or performance or security for high traffic, I will see "buy now" instead of "download now"?
    What might they "ask for money" for, and what are the disadvantages if I can't buy?
    For example, "Java System Application Server Standard Edition 8.1 2005Q1" and
    "Java System Application Server Enterprise Edition 8.1 2005Q1" cost $2,000 and $10,000 respectively. That is money.
    (http://www.sun.com/software/products/appsrvr/index.xml). What are the disadvantages if I stick to the free edition?
    Features like (from sun site)
    Standard Edition:
    Extends the capabilities of the Platform Edition to provide secure, remote, multi-instance, multi-machine management.
    It is aimed at high-volume applications and Web services.
    Enterprise Edition:
    Further extends the capabilities of the Standard Edition to provide continuous availability for applications and
    Web services. It is aimed at business critical, high value applications and Web services.
    Suppose I achieve lots of traffic and I keep the free Platform Edition.
    I won't be able to have a multi-machine webserver set up correctly?
    What are the drawbacks? How big are the performance penalties?
    ------ Technologies Availability
    Finally, I have the idea (I don't know if it's accurate) that there are Sun versions and open-source free versions
    that do or support the same Java-related things.
    Regardless of the way I choose to set up a Java webserver, I will always have all the J2EE technologies like:
    Java API for XML-Based RPC (JAX-RPC), JavaServer Pages, Java Servlets, Enterprise JavaBeans components,
    J2EE Connector Architecture, J2EE Management Model, J2EE Deployment API, Java Management Extensions (JMX),
    J2EE Authorization Contract for Containers, Java API for XML Registries (JAXR), Java Message Service (JMS),
    Java Naming and Directory Interface (JNDI), Java Transaction API (JTA), CORBA, and JDBC data access API.
    I would really appreciate some help. I could learn the basics of all this "stuff" before asking, but the point of asking
    is precisely to avoid starting to learn something I may not use. Obviously, I will have to do a lot of reading, maybe for
    months, before writing the first line of code of my projects, but I want to be certain that it will fit my needs.
    I will be very thankful if you can enlighten me on my Fedora/RedHat, setup, free/cost and technology availability issues.
    Thanks Beforehand,
    Pedro Vaz

    Apache is a free Web-server.
    Tomcat is a servlet (and JSP) container. It can be stand-alone or can be used together with a Web server (like Apache).
    J2EE's scope is much wider than the servlet stuff, but a standalone Tomcat is a good starting point to servlets and JSP.
    One of our "real-world" applications is run by a standalone Tomcat using POI and Velocity. I did not regret this architectural decision.
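    To make the Tomcat/servlet suggestion concrete, here is a minimal servlet sketch (the class name and output are made up for illustration, and it assumes the Servlet API jar that ships with Tomcat is on the compile classpath). Tomcat maps a URL pattern to this class in web.xml and handles all the HTTP plumbing; the database-driven page generation you describe would go inside doGet():

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Minimal servlet: Tomcat instantiates this class and calls doGet()
        // for every HTTP GET request whose URL is mapped to it.
        public class HelloServlet extends HttpServlet {
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                // In a real application this is where you would query the database
                // and build the dynamic HTML from the results.
                response.setContentType("text/html");
                PrintWriter out = response.getWriter();
                out.println("<html><body><h1>Hello from a servlet</h1></body></html>");
            }
        }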

  • Spatial Performance Problem

    I'm new to GIS development, and here is a performance comparison I got.
    Technology adopted and the time consumed to display the whole map in one go:
    ESRI SDE + Oracle Spatial 6 Mins
    ESRI SDE + Oracle 1 Min 20 Sec
    ESRI SDE + SQLSERVER 50 Sec
    ESRI SDE + Shapefile 30 Sec
    Shapefile size 550 MB, 480k rows in total.
    Can anyone tell me why SDE + Spatial performs so much worse than SDE + SQL Server, and what I can do to improve performance when adopting just SDE + Oracle (just for storage, like a normal RDBMS)?
    I don't think Oracle should work worse than SQL Server in an SDE environment. But what should I pay attention to when using Oracle to store map data for SDE?
    Is it not a normal scenario to use SDE with Oracle Spatial? Why did I get such bad performance? I just migrated the data from the shapefile to the SDO_GEOMETRY type and added the indexes Spatial requires.
    Anyone can leave a comment if you know something about this.
    Thanks in advance.

    Here is a partitioning example as promised. Note an upcoming talk at Oracle Open World will show how to extend this model to include spatial partitioning, where we will describe some really important optimizations that will bring scalability (already a differentiator) to new levels. After Oracle Open World we'll post that information to OTN as well (please come talk to us if you happen to go).
    -- First, remove table partition_sales if it exists
    drop table partition_sales;
    -- Create table partition_sales. This table is partitioned
    -- by date range, breaking each year into 4 quarters.
    -- What users see is a single table called partition_sales
    -- but in reality there is one smaller table created for
    -- each partition
    -- For simplicity we are loading all partitions into the same
    -- tablespace. Most often they are loaded into separate tablespaces
    -- for performance, scalability, and manageability reasons
    CREATE TABLE partition_sales
    ( customer_id number,
    sale_date date,
    sale_amount number(10,2),
    geom mdsys.sdo_geometry)
    PARTITION BY RANGE (sale_date)
    (PARTITION BEFORE_2003
    VALUES LESS THAN (TO_DATE('01-JAN-2003','DD-MON-YYYY')),
    PARTITION Q1_2003 VALUES LESS THAN (TO_DATE('01-APR-2003','DD-MON-YYYY')),
    PARTITION Q2_2003 VALUES LESS THAN (TO_DATE('01-JUL-2003','DD-MON-YYYY')),
    PARTITION Q3_2003 VALUES LESS THAN (TO_DATE('01-OCT-2003','DD-MON-YYYY')),
    PARTITION Q4_2003 VALUES LESS THAN (TO_DATE('01-JAN-2004','DD-MON-YYYY')),
    PARTITION Q1_2004 VALUES LESS THAN (TO_DATE('01-APR-2004','DD-MON-YYYY')),
    PARTITION Q2_2004 VALUES LESS THAN (TO_DATE('01-JUL-2004','DD-MON-YYYY')),
    PARTITION Q3_2004 VALUES LESS THAN (TO_DATE('01-OCT-2004','DD-MON-YYYY')),
    PARTITION Q4_2004 VALUES LESS THAN (TO_DATE('01-JAN-2005','DD-MON-YYYY')),
    PARTITION AFTER_2004
    VALUES LESS THAN (MAXVALUE));
    -- Each of these inserts goes into a separate partition.
    -- Although there are only 9 rows inserted, each into a separate
    -- partition, if you had 50 million rows they would be divided into
    -- smaller, more manageable pieces
    insert into partition_sales values (61243, '31-DEC-2002', 18764.23,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-73.943849, 40.66980, NULL), NULL, NULL) );
    insert into partition_sales values (4576, '15-JAN-2003', 27797.05,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-71.017892, 42.336029, NULL), NULL, NULL) );
    insert into partition_sales values (1161, '22-MAY-2003', 16222.50,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-76.610616, 39.30080, NULL), NULL, NULL) );
    insert into partition_sales values (55033, '09-SEP-2003', 1211.33,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-77.016167, 38.90505, NULL), NULL, NULL) );
    insert into partition_sales values (768, '15-DEC-2003', 84397.61,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-73.943851, 40.66975, NULL), NULL, NULL) );
    insert into partition_sales values (61243, '05-JAN-2004', 21764.26,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-73.943849, 40.66980, NULL), NULL, NULL) );
    insert into partition_sales values (6474, '17-JUN-2004', 76411.81,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-85.256956, 35.066209, NULL), NULL, NULL) );
    insert into partition_sales values (505, '27-JUL-2004', 104100.89,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-74.173245, 40.7241, NULL), NULL, NULL) );
    insert into partition_sales values (9151, '11-OCT-2004', 44298.66,
    mdsys.sdo_geometry(2001, 8307,
    mdsys.sdo_point_type(-79.976702, 40.43920, NULL), NULL, NULL) );
    commit;
    -- Let's make sure what we thought happened actually did happen
    -- The following allows us to see how many rows are stored in each partition
    exec dbms_stats.gather_table_stats('SCOTT','PARTITION_SALES');
    select partition_name,num_rows
    from user_tab_partitions
    where table_name='PARTITION_SALES';
    -- Now the data is loaded, we can spatially index it
    -- First, insert into user_sdo_geom_metadata
    delete from user_sdo_geom_metadata where table_name = 'PARTITION_SALES'
    and column_name = 'GEOM';
    insert into user_sdo_geom_metadata values ('PARTITION_SALES', 'GEOM',
    mdsys.sdo_dim_array (
    mdsys.sdo_dim_element ('LONG', -180, 180, 1),
    mdsys.sdo_dim_element ('LAT', -90, 90, 1)),
    8307);
    -- Now create the index. Note the keyword local is used
    -- which creates a separate index for each partition
    -- Because this example is simple, we didn't show storage
    -- of each spatial index partition in a separate tablespace,
    -- which is the common usage, again for performance, scalability,
    -- and manageability.
    drop index partition_sales_sidx;
    create index partition_sales_sidx on partition_sales (geom)
    indextype is mdsys.spatial_index parameters ('layer_gtype=point')
    local;
    -- Any query that uses the partition key (the date) will only search the
    -- partition associated with that date:
    select customer_id, sale_amount
    from partition_sales
    where sale_date = '17-JUN-2004'
    and sdo_relate(geom,
    mdsys.sdo_geometry(2003, 8307, null, mdsys.sdo_elem_info_array(1,1003,1),
    mdsys.sdo_ordinate_array(-86,34, -85,34, -85,36, -86,36, -86,34)),
    'mask=anyinteract querytype=window') = 'TRUE';
    -- The previous query only touched a single partition.
    -- You don't have to specify the partition key, though.
    -- All partitions can be searched if required. To speed up
    -- these kinds of queries, you can search all partitions in
    -- parallel by specifying the keyword parallel in the
    -- create index statement, or altering the index
    alter index partition_sales_sidx parallel 4;
    -- No date specified in this query, so all partitions may be searched
    select customer_id, sale_amount, sale_date
    from partition_sales
    where sdo_relate(geom,
    mdsys.sdo_geometry(2003, 8307, null, mdsys.sdo_elem_info_array(1,1003,1),
    mdsys.sdo_ordinate_array(-75,38, -70,38, -70,45, -75,45, -75,38)),
    'mask=anyinteract querytype=window') = 'TRUE';

  • How I sped up my LR4

    Previously I was deleting/renaming the LR4 preferences file to speed up LR4, but I only found this to be a temporary fix. It didn't fix it permanently. A few people here are saying it's caused by a corrupt preferences file. I haven't explored the route of regenerating the file and making it read-only so LR4 can't "corrupt" it.
    However, I have done some exhaustive researching (mainly through google) on why LR4 has been a turtle. A lot of people claim that it used to be fast (almost as fast as LR3 days). I didn't understand why LR3 was so much faster than LR4 and it was infuriating when I needed to sit down and edit. But after doing a couple things, it is pretty quick for me now and has stayed this way. Your set-up might vary, but here is mine:
    2011 MBP 17" i7 2.2ghz 16gb ram
    OCZ Vertex 4 SSD
    Thunderbolted to an Apple Cinema 24" and daisy chained to the Promise Pegasus R4 array.
    Here is what I changed:
    - I made OSX start in 64bit mode, when it was only running 32bit normally (important since this allows OSX to access the full 16gb of ram)
    - Potentially corrupted ICC colour profiles. So I went ahead and backed them up and deleted mine and recalibrated afterwards if required.
    - LR4.2 cache size. I increased it from 20gb to 80gb on the same drive that LR4 was running. Purge your Cache.
    - I found that OSX on my MBP was tremendously slower when I hooked up an external display, so I disabled the internal display (http://gizmodo.com/5938452/a-trick-to-make-using-an-external-monitor-with-your-macbook-way-better) and this has been incredibly better. It has to do with the graphics card having to support two displays when natively it is only meant for one.
    - Resolution. I find if I decreased the resolution of my cinema display, from 2560 to 1920x? ( the highest resolution of the 17" mbp) LR4 is marginally faster. You can even test this by resizing the LR4 window and seeing how much faster it is.
    - LR4 Catalog Settings -> File Handling.
    -> Standard Preview Size (set to 1024, unless you need higher)
    -> Preview Quality (set to low)
    - Render 1:1 previews on Import
    - Optimize Catalog
    Hope this helps some of you LR4 users out. Your mileage may vary

    Hi,
    Here are a few more steps to try, apart from what you have already tried:
    http://helpx.adobe.com/lightroom/kb/optimize-performance-lightroom.html
    Please skip the solutions which you have already tried.
    Thanks,
    Sumit Singh

  • Reference partitioning - thoughts

    Hi,
    At the moment we use range-hash partitioning of a large dimension table (dimensional model warehouse) with 2 levels - range partitioned on columns only available at the bottom level of the hierarchy - date and issue_id.
    The result is a partition with a null value - we assume we would get a null partition in the large fact table if it were partitioned by reference to the large dimension.
    The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.
    It has been suggested that we would get automatic partition-wise joins if we used reference partitioning.
    We would have thought we would get that with range-hash on both tables.
    Are there any disadvantages with reference partitioning?
    We know we can't use range-interval partitioning.
    Thanks

    >
    At the moment, the large dimension table and large fact table have the same partitioning strategy but are partitioned independently (range-hash);
    the range column is a date datatype and the hash column is the surrogate key
    >
    As long as the 'hash column' is the SAME key value in both tables there is no problem. Obviously you can't hash on one column/value in one table and a different one in the other table.
    >
    With regards to null values, the dimension table has 3 levels in it (part of a dimensional model data warehouse), i.e. the date on which the table is partitioned exists only at the lowest level of the dimension.
    >
    High or low doesn't matter and, as you ask in your other thread (Order of columns in table - how important from a performance perspective), the column order generally doesn't matter.
    >
    By default in a dimensional model data warehouse, this attribute is not populated in the higher levels, so there is a default null value in the dimension table for such records.
    >
    Still not clear what you mean by this. The columns must be populated at some point or they wouldn't need to be in the table. Can you provide a small sample of data that shows what you mean?
    >
    The problem the performance team are attempting to solve is as follows:
    the two tables are joined on the sub-partition key; they have tried joining the two tables on the entire partition key but then complained they don't get star transformation.
    >
    Which means that team isn't trying to 'solve' a problem at all. They are just trying to mechanically achieve a 'star transformation'.
    A full partition-wise join REQUIRES that the partitioning be on the join columns or you need to use reference partitioning. See the doc I provided the link for earlier:
    >
    Full Partition-Wise Joins
    A full partition-wise join divides a large join into smaller joins between a pair of partitions from the two joined tables. To use this feature, you must equipartition both tables on their join keys, or use reference partitioning.
    >
    They believe that by partitioning by reference as opposed to indepently they will get a partition-wise join automatically.
    >
    They may. But you don't need to partition by reference to get partition-wise joins. And you don't need to get 'star transformation' to get the best performance.
    Static partition pruning will occur, if possible, whether a star transformation is done or not. It is dynamic pruning that is done AFTER a star transform. Again, you need to review all of the relevant sections of that doc. They cover most of this, with example code and example execution plans.
    >
    Dynamic Pruning with Star Transformation
    Statements that get transformed by the database using the star transformation result in dynamic pruning.
    >
    Also, there are some requirements before star transformation can even be considered. The main one is that it must be ENABLED; it is NOT enabled by default. Has your team enabled the use of the star transform?
    The database data warehousing guide discusses star queries and how to tune them:
    http://docs.oracle.com/cd/E11882_01/server.112/e25554/schemas.htm#CIHFGCEJ
    >
    Tuning Star Queries
    To get the best possible performance for star queries, it is important to follow some basic guidelines:
    A bitmap index should be built on each of the foreign key columns of the fact table or tables.
    The initialization parameter STAR_TRANSFORMATION_ENABLED should be set to TRUE. This enables an important optimizer feature for star-queries. It is set to FALSE by default for backward-compatibility.
    When a data warehouse satisfies these conditions, the majority of the star queries running in the data warehouse uses a query execution strategy known as the star transformation. The star transformation provides very efficient query performance for star queries.
    >
    And that doc section ALSO has example code and an example execution plan that shows that the star transform is being use.
    That section also has some important info about how Oracle chooses to use a star transform and a large list of restrictions where the transform is NOT supported.
    >
    How Oracle Chooses to Use Star Transformation
    The optimizer generates and saves the best plan it can produce without the transformation. If the transformation is enabled, the optimizer then tries to apply it to the query and, if applicable, generates the best plan using the transformed query. Based on a comparison of the cost estimates between the best plans for the two versions of the query, the optimizer then decides whether to use the best plan for the transformed or untransformed version.
    If the query requires accessing a large percentage of the rows in the fact table, it might be better to use a full table scan and not use the transformations. However, if the constraining predicates on the dimension tables are sufficiently selective that only a small portion of the fact table must be retrieved, the plan based on the transformation will probably be superior.
    Note that the optimizer generates a subquery for a dimension table only if it decides that it is reasonable to do so based on a number of criteria. There is no guarantee that subqueries will be generated for all dimension tables. The optimizer may also decide, based on the properties of the tables and the query, that the transformation does not merit being applied to a particular query. In this case the best regular plan will be used.
    Star Transformation Restrictions
    Star transformation is not supported for tables with any of the following characteristics:
    >
    Re reference partitioning
    >
    Also, this is a data warehouse star model, and it was mentioned to us that reference partitioning is not great with local indexes - the large fact table has several local bitmap indexes.
    Any thoughts on reference partitioning negatively impacting performance in this way compared to a standalone partitioned table?
    >
    Reference partitioning is for those situations where your child table does NOT have a column that the parent table is being partitioned on. That is NOT your use case. Don't use reference partitioning unless your use case is appropriate.
    I suggest that you and your team thoroughly review all of the relevant sections of both the database data warehousing guide and the VLDB and partitioning guide.
    Then create a SIMPLE data model that only includes your partitioning keys and not all of the other columns. Experiment with that simple model with a small amount of data and run the traces and execution plans until you get the behaviour you think you are wanting.
    Then scale it up and test it. You cannot design it all ahead of time and expect it to work the way you want.
    You need to use an iterative approach. That starts by collecting all the relevant information about your data: how much data, how is it organized, how is it updated (batch or online), how is it queried. You already mention using hash subpartitioning but haven't posted ANYTHING that indicates you even need to use hash. So why has that decision already been made when you haven't even gotten past the basics yet?

  • NIO + long-living connections

    Hi!
    I'm experimenting with an NIO HTTP service and have a problem with persistent (Connection: Keep-Alive) connections: I cannot get more than ~25 requests per second (using ApacheBench) for a single connection. The CPU is almost free (!!), and I have no delays in my code at all, of course. I have tried the NetBeans profiler - almost all time is spent inside selector.select(to).
    Any thoughts?
    Edited by: anli on Jun 15, 2008 5:52 PM

    It's a historically very important optimization which allows TCP to coalesce small outgoing packets into one larger packet, at the cost of increased latency. It's an IETF requirement that it be off by default for any host connected to the Internet, and Sun (and the authors of the Berkeley Sockets API before them) no doubt took their lead from that.
    As the only important thing is when the last part of the message arrives, you might consider turning it off again and building up the complete response in a buffer before calling channel.write().
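    A rough sketch of that buffering idea, assuming an already connected SocketChannel (the class and method names here are made up; whether you leave Nagle's algorithm on or disable it with setTcpNoDelay(true) depends on how latency-sensitive you are):

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.SocketChannel;

        public class BufferedResponseWriter {

            // Assemble the whole response in one buffer and push it out in as few
            // write() calls as possible, instead of many small channel.write() calls.
            public static void sendResponse(SocketChannel channel, byte[] headers, byte[] body)
                    throws IOException {
                // true disables Nagle's coalescing (lower latency, more packets);
                // false leaves the optimization on.
                channel.socket().setTcpNoDelay(false);

                ByteBuffer response = ByteBuffer.allocate(headers.length + body.length);
                response.put(headers);
                response.put(body);
                response.flip();

                while (response.hasRemaining()) {
                    // In a real non-blocking server you would register OP_WRITE with the
                    // selector when write() returns 0, rather than spinning in this loop.
                    channel.write(response);
                }
            }
        }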

  • Speeding-up FCPX: 'purge' = snake oil or magic cure??

    today, i made a … weird observation:
    I dropped in a SDcard with about 20GBs AVCHD, ~180clips.
    Import, optimize, proxy - my routine, to give my under-powered MacMini 2.26 a chance to handle 1080/50p.
    usually, I do that overnight, knowing, this is a time-consuming process.
    Today, I did it by day - noticing: 2% after 2h ... oooops!
    Just out of curiosity, me no engineer, I launched Activity Monitor.
    ugly: half my RAM blue = inactive.
    more ugly: 0.2MB free, 600MB swap ....
    Someone told me lately, to type 'PURGE' in Terminal, which will 'free' inactive memory.
    as mentioned: Terminal, Unix, commands - all terra incognita for me!
    5sec later, brave Mini told me 4.5GBs free - aaaand: the activity display within FCPX starts to count beyond snail speed. within ~5h, my 3h footage was imported, optimized and proxy-ed! (that is fast on my machine...... )
    Questions:
    #1) Why does FCPX 'occupy' so much RAM on import? In my humble understanding, it just shovels data from one drive/card to another, plus some minor indexing/reading out of metadata (180 clips - that sounds to me not thaaaat much)
    #2) Why doesn't it 'free' that RAM when starting with activity No. 2, transcoding?? It still doesn't make use of the additional, freed 4GBs ... but FCPX transcodes noticeably faster. Activity says: before 5% CPU usage, now 160-180%?
    … or is it just me? (OS 10.84, FCPX 0.8 ) ....
    (my Mac is not in 'mint condition')
    Purge = a wonder??

    innocentius wrote:
    … Could it be because I always work with short projects? …
    just to be nitpicky:
    it's about the imports/Events, not 'projects' .....
    no, I know what you mean.
    I have never really observed the import process ... I know a quick take added to an existing Event gets imported (= and transcoded) blazing fast - even on my antique hardware.
    It seems that larger imports (no numerical benchmark), large by number of clips and by MBs, make my 8GB RAM stumble.
    On another board, I got the advice to import only, and then trigger the transcoding manually.
    ... in the next weeks, I'll test it, do screenshots, use my stopwatch.
    thanks for sharing your observation!

  • Oracle automatic statistics optimizer job is not running after full import

    Hi All,
    I did a full import into our QA database. The import was successful; however, GATHER_STATS_JOB has not run since Sep 18, 2010 even though it is enabled and scheduled. I queried last_analyzed to check, and it's confirmed that it didn't run after Sep 18, 2010.
    Please refer below for the output
    OWNER JOB_NAME ENABL STATE START_DATE END_DATE LAST_START_DATE NEXT_RUN_D
    SYS GATHER_STATS_JOB TRUE SCHEDULED 18-09-2010 06:00:02
    Oracle defined automatic optimizer statistics collection job
    =======
    SQL> select OWNER,JOB_NAME,STATUS,REQ_START_DATE,
    to_char(ACTUAL_START_DATE, 'dd-mm-yyyy HH24:MI:SS') ACTUAL_START_DATE,RUN_DURATION
    from dba_scheduler_job_run_details where
    job_name='GATHER_STATS_JOB' order by ACTUAL_START_DATE asc;
    OWNER JOB_NAME STATUS REQ_START_DATE ACTUAL_START_DATE
    RUN_DURATION
    SYS GATHER_STATS_JOB SUCCEEDED 16-09-2010 22:00:00
    +000 00:00:22
    SYS GATHER_STATS_JOB SUCCEEDED 17-09-2010 22:00:02
    +000 00:00:18
    SYS GATHER_STATS_JOB SUCCEEDED 18-09-2010 06:00:02
    +000 00:00:26
    What could be the reason for the GATHER_STATS_JOB job not running although it's set to auto?
    SQL> select dbms_stats.get_param('AUTOSTATS_TARGET') from dual;
    DBMS_STATS.GET_PARAM('AUTOSTATS_TARGET')
    AUTO
    Does anybody have this kind of experience? Please share.
    Appreciate your responses.
    Regards
    srh

    >So basically you are saying that if none of the tables are changed then GATHER_STATS_JOB will not run, but I see tables are updated and still the job is not running. I did query dba_scheduler_jobs and the state of the job is true and scheduled. Please see my previous post for the output.
    >Am I missing anything here? Do I need to look at some parameter settings?
    GATHER_STATS_JOB will run, and if there is any table in which there's a 10 percent change in data, it will gather statistics on that table. If a table's data has changed by less than 10 percent, it will not gather statistics on it.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
    Hope this helps.
    -Anantha

  • "AVCHD" gets choppy when I try to edit in "FCPX" ... should I import and optimize video or create a proxy media???

    So I'm shooting some video on a Sony CX430
    60fps & 24fps
    Both on AVCHD format.
    And working out of my MacBookPro
      Processor 2.8Ghz (intel core 2 duo)
    Mem 8GB 1067 MHz ddr3
    1TB hard drive Space (440GB actual free space right now)
    I'm having trouble editing the video. The video becomes choppy and seems to drop frames. It's nowhere near as smooth as when I play it back on the camera itself.
    1. Should I be transcoding the files as I import them?
    2. And if so, which option:
       I. Create optimized media, or
      II. Create proxy media?
    Once I have played around and done some editing in FCPX:
    3. Do I need to buy and use a compressor to get them burned onto a Blu-ray or DVD?
    4. What is a good (external) Blu-ray burner I should use/buy?

    #1 On my completely under-powered MacMini, 2.26GHz/8GB, I'm noticing it's faster to make the import/conversion a two-step process: step 1, import; step 2, after import, select all clips, cmd-click, convert. If you import overnight, I wouldn't care anyhow.....
    #2 For stutter-free editing, I prefer proxies; … Optimize is only recommended when you use a separate, external hard drive, due to the sheer amount of data (~220GB/h)
    #3 FCPX allows creation of DVDs/Blu-rays - but very simple ones, to be exact: two 'designs', just one Project, no fancy menus etc. 'Disk' hasn't been Apple's preferred delivery format for many years …
    #4 I'm not burning disks; LG and Samsung seem to have a good track-record ...

  • Can I have someone optimize my iWeb site and then re-import it?

    I hired someone to optimize my iWeb site: put the keywords in the Title and Meta tags and
    place the keywords in the ALT tags for the images.
    Is it possible to re-import the edited website back into iWeb '08? I've tried dragging it, but it didn't work; I've tried creating a new site, but I only get the option of themes. Am I missing something that somebody knows about? I'd really appreciate some input.
    Also, do you know if there is a better way to optimize these sites inside of iWeb or a separate software program I should use?

    Have a look here for more information on SEO....
    http://alyeska.altervista.org/en/iWebiWebGoogle.html
    and here for optimization....
    http://www.tonbrand.nl/
