My ideal iP4 case

Hoping someone can help me locate my ideal iP4 case?
1. Crystal clear, to show off the beauty of the phone.
2. Thin silicone or rubber material, so as not to add bulk or weight.
3. Material slick enough to slip in and out of my jeans pocket without sticking.
4. Good openings for ports and easy access to volume, vibrate, and sleep buttons.
5. Cloned iP4 fit and finish.
6. Durable enough to withstand the dirt and grime that come along with everyday use.
7. Easy and durable enough to put on and take off daily (using the dock).
8. Protective enough to prevent damage from minor drops (12"-18") and from being set down on hard surfaces.
9. Ability to dock with the case on... this one is far-reaching, I know.
These are my desires!

Griffin Technology - Reveal Case for Apple® iPhone® 4 - Black/Clear. I've tried a number of cases and this one is perfect.
1. Back of the cover is crystal clear.
2. Rubber around the phone, protecting the corners (Apple Bumper-like, but not as thick).
3. Slips in and out of pocket nicely.
4. All access to buttons and ports...no problems.
5. Perfect fit and finish (imo)
6. No problems with dirt and/or grime.
7. I don't use a dock, so I can't speak to this, but I will say on/off works easily.
8. Protects the corners....and the clear back protects the back of the phone from scratches.
9. Unfortunately, no docking with case on.

Similar Messages

  • Case for PC build

    Hi!
    After some help and suggestions from this forum, I'm ready to start building my new computer. I'm just looking for my ideal PC case; it can be a tower or a midi (depth should not exceed 510mm). I have seen the Corsair Graphite Series™ 600T, which would be my ideal except that its depth is slightly more than I wanted. If anyone can suggest a case with the same features I would appreciate it; if not, I will have to go with the 600T, or the taller 700D, even though it is slightly deeper than I wanted.
    Mike

    Thanks Harm
    Yes, you are right, I know it; the problem is not so much the height as the depth of the case. This should ideally be less than 520mm (20.47") in order to fit the space I planned for it. The Corsair 600T, along with the larger versions (Obsidian 700D & 800D), has the correct specifications and everything I need in a case. If I am unable to find a case with a similar spec, I will go for the larger Obsidian 700D case and just have to find a new home for it.
    Mike

  • Caution: iP4 Users Using the InvisiShield Screen Protector

    The good news is that the vendors mentioned below are aware of what I am about to share and are fully supporting iP4 users who incur the issue, at no additional charge.
    My caution involves those iP4 users who have installed the InvisiShield screen protector and then attempt to fit the iFrogz Luxe case over it: the case will inadvertently lift up the lower corners of the Shield as you slip the phone into the lower portion of the case. Again, iFrogz is aware of this and is working on a retrofit of the case, issuing gift codes for replacement cases when they become available.
    Note: according to iFrogz, the issue is that the internal felt padding in their Luxe case is a little too thick. I used the iFrogz Luxe with my iP3G for two years and it's a great case, which led me back to it again with my iPhone.
    As for InvisiShield, they honor their lifetime warranty and replaced my Shield at no cost to me, and I still highly recommend their Shield as the best screen protector on the market.
    With that, I have also discovered another iP4 case that was even more devastating to my InvisiShield screen protector: the Belkin Grip Edge Premium Leather Shell case. It is another case with great fit and feel for the iP4, leaving you feeling like there isn't a case attached at all. Unfortunately, the CS person I spoke with at Belkin was rather flippant, leaving me the impression that I could return the case, period, which I will reluctantly do; it is a great case otherwise, but it doesn't meet my needs with the InvisiShield.
    For those wondering why I attempted the purchase of another case: I just feel better having my iP4 protected in a streamlined case, despite carrying it in my pocket. Unfortunately, even though I explained my concerns about having the Shield on the phone, the AT&T store rep impulsively inserted my iP4 into the Belkin case with reckless abandon, lifting all sides of my InvisiShield protector...
    Although this created another nuisance for me in having to replace my Shield again, I must reinforce that AT&T will reimburse both cases without question.
    If you're still with me... there is one more cautionary note I want to pass along, for those of you awaiting the Apple Bumper to be shipped: before placing it on your iP4 with the Shield installed, beware that Apple may not have considered (nor should they have to consider) the potential consequences for users with Shield protectors on their iP4s when applying these cases or bumpers. So be very careful before putting your Bumper on your iP4. I have submitted this to Apple today as feedback, and hopefully they will add a disclosure note for the rest of us. I love my iP4 and have not experienced any of the reported antenna issues... and note that I am still using it without a case or Bumper while awaiting my iFrogz case.

    ironman1964, great observation... I personally believe there are a few variables coming into play, with the glass you mentioned being one, and the chrome lip another. Although, having just looked at my teen's 3GS, I noted the chrome lip you're referring to, there is still a small portion of the 3GS's screen that rises above the chrome and would be exposed to the sides of the Luxe case. All of which is kind of moot when we compare against the new design of the iP4... so I am focused more on the glass top of the new iP4 screen as being incompatible with the Zagg Shield. I must admit some ignorance here, but is the iP3G/3GS screen made of a different material than the iP4 (non-glass vs. glass)? If so, that may be a whole other variable in why the Zagg Shield isn't adhering as well on the iP4, and thereby not withstanding the slightest torque on its sides and corners.
    The "tight fit" you refer to is due to the internal felt padding in the iFrogz Luxe case, which doesn't help... so hopefully their thinning it down will resolve the issue without compromising a fit snug enough to hold the phone in place.
    The other observation I made was that the front side edges of the new Luxe case wrap around more of the front edges of the iP4 than its 3G/3GS predecessor's did, which encroaches more on the Zagg Shield, causing its lower corners to catch and lift as you slide the iP4 into the lower section... and likewise the top section of the iFrogz case does the same as you slide it onto the iP4.
    I have my iFrogz "gift code" and am just waiting for their newly released retrofitted case. I will report again as soon as I receive it. Thanks for replying back.

  • Can I share an iCloud photo library between family members?

    I am searching for a good photo and video backup solution for my entire family. Previously I was backing up photos to a Windows Home Server that did a weekly backup to Amazon's cloud storage; however, the system disk on the home server blew up, leaving me with the choice of rebuilding the server or looking for a more seamless, lightweight solution. I have a large quantity of photos and video on a home server data drive, as well as a large number of photos that have recently been scanned to my MacBook Pro. We have almost completely transitioned to Apple products and use Time Machine for MacBook backups, so that function of the Home Server is no longer useful.
    I like the concept of Apple's iCloud Photo Library; however, I need a solution that allows photos from at least my wife's and my devices to be seamlessly backed up to the library. We use different iCloud IDs. The ideal use case would be that photos taken on any of our devices (across a set of iCloud IDs) would be shared within a single library.
    Family Sharing allows you to share a number of things across family members, but currently the photo library doesn't appear to be one of them.
    Any pointers on how to proceed? Will the OS X Photo app and the move to bring iCloud library to the Mac make any difference? Any alternative solutions?

    Someone else can probably answer the family sharing question better than I can, but I wanted to respond with an alternate suggestion. I didn't find the features I wanted in family sharing, so I didn't give it much of a chance. It did seem like you'd have a separate album for each family member.
    I just recently started using Flickr, and it's great. I looked into it a few years ago, and there were limitations on how many MBs you could upload per day, and I don't remember being impressed by the amount of free storage. Now Flickr gives you 1000 GB for free and stores images at their original quality and size. If you download the Flickr app on your phone, you can set it to automatically upload all of the new pictures on your iPhone, iPad, etc. If you and a family member share a Flickr account, you can both auto-sync your camera rolls into the same Flickr account.
    Personally, I'm using Flickr as a way to automatically back up photos for free. I quickly ran out of free iCloud storage and haven't wanted to pay for more. In Flickr, the auto-uploads from your phone are set to private by default, but you can use Flickr to share albums with friends who don't own any Apple products... unlike iCloud Photo Sharing, which only allows friends with an Apple device to view shared albums.

  • When, if ever, will Media Encoder and Encore get GTX 680 CUDA support?

    ...and will it really make a difference?
    I've got the Creative Cloud deal, so my software is up to date (CS6) as of 8/16/12, and after installing my new GTX 680 4GB card and enabling it in Premiere and After Effects, I see a nice performance gain over my older GTX 550 Ti 1.5GB.
    When I drop a finished H.264 Blu-ray file into a sequence and export it from Premiere into AME, my graphics card shows about 40-60% regular usage throughout the encoding, no matter what format I am exporting to. But if I just drop the same original H.264 file into AME and set it to the exact same encode output preset, then the graphics card is not used at all.
    Also, when you run GPUSniffer in the Encore program folder, you can see that it will not accept this card for GPU utilization. I've looked all around for the database of supported cards Encore is pulling from, and I've concluded that it must be in a compiled DLL file, so unlike Premiere, you can't modify it to detect the GPU. AME has neither the list of supported cards nor the GPUSniffer program in its program folder, so both of these programs will need the GTX 680 added to their supported-card lists by Adobe directly with an update, like they did with After Effects CS6.
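    (Side note for anyone who hasn't done the Premiere-side edit: the list Premiere checks is, as far as I recall, a plain text file named cuda_supported_cards.txt in the Premiere Pro program folder, with one card name per line. Adding your card's name exactly as GPUSniffer reports it is the usual workaround, e.g.:
        ...existing entries...
        GeForce GTX 680
    Treat the file name and entry here as from memory; verify against your own install.)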
    I can only imagine that transcoding in either program would benefit from CUDA acceleration, as a few other transcoding apps out there already do.
    If Adobe is listening, the #1 thing I (all of us?) want in CS6 is for all transcoding products (Premiere direct export & projects, After Effects, Media Encoder direct file import, Encore) to utilize GTX 670, 680, and 690 cards for acceleration out of the box. I don't want to use some 3rd-party transcoder to get GPU-aided transcoding; I want my CS6 Master Collection to be able to do it. Not too much to ask, right?

    Hi, I have written a 'proof-of-concept' (i.e. unsupported beta) NVidia H.264-encoder plugin for Adobe Premiere Pro CS6. If you are interested in trying it, go to the following forum post: http://forums.adobe.com/message/5458381
    A couple of notes:
    (0) GPU requirement: NVidia "Kepler" GPU or later (desktop GTX650 or higher, laptop GT650M or higher.)  The plugin uses the new dedicated hardware-encoder (NVENC) introduced with the 2012 Kepler GPU.
    (1) the "ideal" best-case speedup (over Mainconcept H264) is roughly 4-5x on a consumer desktop PC (single-socket, Intel i5-3570K). Naturally, actual results vary with the source video/render.
    (2) Interlaced video encoding is NOT supported.  (I couldn't get this to work; I think it's a GPU-driver issue.)  
    (3) Only uncompressed PCM-audio is supported (no AAC/AC-3 audio.) Also, multiplexing is limited to MPEG-2 TS.  If you want to generate *.MP4 files, you'll need to do your own offline postprocessing outside of Adobe.
    (4) In terms of picture-quality (artifacts, compression efficiency), NVidia hardware (GPU) encoding is still inferior to software-only solutions (such as Mainconcept or x264.)
    In short, don't expect the NVENC-plugin to replace software-encoding any time soon.  It's not a production-quality product.  And even if it were, software-encoding still has a place in a real workflow;  until consumer hardware-GPU encoding can match the video-quality of the Mainconcept encoder, you'll still be using Mainconcept to do your final production video renders.
    CUDA is meant for general computing, including encoding.
    If you perform H.264 encoding using CUDA on a GTX 680, a video that takes 90 minutes on an i7 CPU can be done in 1 minute.
    At that speed, the output of the hardware-encoder would be so poor, the video may as well be disposable.  NVENC is NOT faster than Intel Quicksync; actually Quicksync can be substantially faster.  But NVENC (currently) holds the slight edge in compression-quality.
    Off on a tangent, CUDA in MPE acceleration offers both a speed advantage and a quality advantage, because the CUDA video-frame processing is highly parallelizable and can exploit the GPU's numerous floating-point computational arrays to speed up processing and do more complex processing. That's a double win. So what does this have to do with encoding? Right now, hardware video encoding (which comes after the video rendering step) only offers improved speed. My experience with NVENC has shown me it does not improve video quality. At best, it is comparable to good (and slower) software encoding when allowed high video bitrates. At lower video bitrates (such as YouTube streaming), software encoding is still A LOT better.

  • Use of AFCS in commercial applications

    I have developed a Flex application that a client has already commercially purchased from me, before the public beta release of Cocomo, now AFCS.
    Am I legally permitted to use AFCS features in the latest update of this application for the client to trial, if I am not charging them any extra for the addition of these features until AFCS is commercially released?
    cheers

  • Use of JMS in Workflow applications

    Hi,
    I am developing a workflow engine in a J2EE project which sends a file from one person to another. Currently this scenario is handled by a scheduler that looks for tasks in the database; for each task, depending on the current user, it determines from an organizational hierarchy the next user to whom the task should be assigned, and it updates a flag in the database so that the task is hidden from the first person and becomes visible to the second. It is a database update that happens via a scheduler. I wanted to know whether JMS is the right choice for performing the same task. If so, how and why?
    Cheers,
    Sirisha

    Hi,
    The way I could imagine a solution based on JMS would be to use one queue per user. A message is sent to Q1 (user1) and is then processed by your user1 client application. This client application somehow has knowledge of the organizational hierarchy and, after processing the message, can decide which user (i.e., which queue) to send it to. Note that this is an ideal use-case scenario for MDBs, especially if your client applications also need to perform database accesses.
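    To make that concrete, here is a minimal sketch of one such MDB, using EJB 3-style annotations for brevity. The queue names, TaskRouterBean, and the OrgHierarchy lookup are hypothetical placeholders for your own naming and hierarchy logic, and activation property names vary slightly by container:
        import javax.annotation.Resource;
        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.*;

        // Listens on this user's queue, processes the task, then forwards it
        // to the next user's queue as determined by the organizational hierarchy.
        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "destination",   // name varies by container
                                      propertyValue = "queue/user1")
        })
        public class TaskRouterBean implements MessageListener {

            @Resource(mappedName = "java:/ConnectionFactory")
            private ConnectionFactory connectionFactory;

            public void onMessage(Message message) {
                Connection connection = null;
                try {
                    String taskId = ((TextMessage) message).getText();
                    // ... process the task here; database access from an MDB is fine ...

                    // OrgHierarchy is a stand-in for your own hierarchy lookup logic.
                    String nextQueue = OrgHierarchy.nextQueueFor("user1", taskId);

                    connection = connectionFactory.createConnection();
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(session.createQueue(nextQueue));
                    producer.send(session.createTextMessage(taskId));
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                } finally {
                    if (connection != null) {
                        try { connection.close(); } catch (JMSException ignore) { }
                    }
                }
            }
        }
    Each user's application (or the next MDB in the chain) consumes from its own queue, so the hierarchy knowledge lives in one place and the database flag-flipping is replaced by explicit message flow.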
    Hope it helps
    Arnaud
    www.arjuna.com

  • Segmentation fault when using snapshot isolation with Berkeley DB 6.1.19 and 5.1.29

    Hello,
    I have been experimenting with snapshot isolation with Berkeley DB, but I find that it frequently triggers a segmentation fault when write transactions are in progress.  The following test program reliably demonstrates the problem in Linux using either 5.1.29 or 6.1.19. 
    https://anl.app.box.com/s/3qq2yiij2676cg3vkgik
    Compilation instructions are at the top of the file.  The test program creates a temporary directory in /tmp, opens a new environment with the DB_MULTIVERSION flag, and spawns 8 threads.  Each thread performs 100 transactional put operations using DB_TXN_SNAPSHOT.  The stack trace when the program crashes generally looks like this:
    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7ffff7483700 (LWP 11871)]
    0x00007ffff795e190 in __memp_fput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    (gdb) where
    #0  0x00007ffff795e190 in __memp_fput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #1  0x00007ffff7883c30 in __bam_get_root ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #2  0x00007ffff7883dca in __bam_search ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #3  0x00007ffff7870246 in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #4  0x00007ffff787468f in ?? () from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #5  0x00007ffff79099f4 in __dbc_iput ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #6  0x00007ffff7906c10 in __db_put ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #7  0x00007ffff79191eb in __db_put_pp ()
       from /usr/lib/x86_64-linux-gnu/libdb-5.1.so
    #8  0x0000000000400f14 in thread_fn (foo=0x0)
        at ../tests/transactional-osd/bdb-snapshot-write.c:154
    #9  0x00007ffff7bc4182 in start_thread (arg=0x7ffff7483700)
        at pthread_create.c:312
    #10 0x00007ffff757f38d in clone ()
        at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    I understand that this test program, with 8 concurrent (and deliberately conflicting) writers, is not an ideal use case for snapshot isolation, but this can be triggered in other scenarios as well.
    You can disable snapshot isolation by toggling the value of the USE_SNAP #define near the top of the source, and the test program then runs fine without it.
    Can someone help me to identify the problem?
    many thanks,
    -Phil

    Hi Phil,
       We have taken a look at this in more detail, and there was a bug in the code. We have fixed the bug and will roll the fix into our next 6.1 release. If you would like an early patch that goes on top of 6.1.19, please email me at [email protected], referencing this forum post, and I can get a patch sent out to you. It will be a .diff file that you apply to the source code; then rebuild the library. Once again, thanks for finding the issue and for providing a great test program, which helped tremendously in getting this resolved.
    thanks
    mike

  • Valuation Area at Company Code level

    Hi,
    In what ideal business case/industry should one set the valuation area at company code level, when SAP recommends that the valuation area be set at plant level? In other words, what is the practical purpose of a valuation area at company code level?

    Hi
    Note that the valuation area is linked to records in table MBEW. So if you have a company with 3 plants and a single valuation for the whole company, the valuation of materials is the same in all 3 plants, with the corresponding implications for stock valuation and sales margin valuation (remember that this value can flow, and usually does, to condition VPRS in SD; see SAP Note 1365939 - VPRS logic and Customizing settings in SD and SAP Note 547570 - FAQ: VPRS in pricing for more details). If you valuate differently by plant, stock and margin calculations differ.
    What are the elements you must consider?
    1. If the material is purchased, think about whether the same material has different conditions (different conditions in MM, for instance indirect costs such as shipment costs, or different vendors with meaningful differences in prices or discounts). It makes no difference whether it is a raw material or a material that you sell; a raw material will appear in product costing as an item in the BOM. First of all, consider the impact on stock value; second, think about how the material is delivered to customers from the plants and the impact on profitability analysis. If there are meaningful differences for the same material between plants and you assign the same value across the whole company, then depending on the mix, the figures may mislead you about this issue.
    2. If the material is manufactured, talk with your CO consultant about the impact that different routings, BOMs, and raw material prices between plants could have. The same considerations apply for stock valuation and margin.
    Finally, on page 10 of the document you can read:
    Valuation must be at plant level in the following cases:
    • If you want to use the application component Production Planning (PP) or Costing
    • If your system is an SAP Retail system
    I hope this helps you
    Regards
    Eduardo
    P.S. I forgot: discuss this with the CFO of the company, taking into account the restrictions (the cases where it must be plant level).

  • Webdynpro components integration procedure

    Hi, can anybody give me a suggestion on how to integrate different components into one project? Each programmer creates their own Web Dynpro component under a separate project, and finally all the components from the different projects need to be integrated into one project. Please suggest how to integrate them.
    My basic question is: what procedure should be followed to integrate Web Dynpro components into one project?

    If you have not created DCs and have proceeded with simple Web Dynpro projects, you can copy the individual components into a single project. There you can use the "used Web Dynpro components" concept and exchange data via the interface controller.
    A lot of effort is required if you later modify the components, as you need to copy the inner components again and again!
    Ideally, for team development you should use JDI. Please follow scenario 2+ for the same.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/d2/4551421705be30e10000000a155106/frameset.htm
    regards

  • Branched code deliveries through Oracle E-Business Suite patches

    Hello,
    Occasionally, we find quite a few E-Business Suite patches delivering branched versions of code. This is reflected in the readme document, and it is understood that the fix/change in the branched version may not be present in subsequent patches. Additionally, the readme suggests contacting Oracle Support when these files are re-delivered through patches (to ensure that the old fix and the new fix co-exist).
    Can someone shed some light on the maintenance aspects of such branched versions of code? I cannot think of any approach other than maintaining a manual record of such branched code components (files) and/or adding them to applcust.txt.
    Does anyone have a different approach to managing these?
    Thanks.

    Hi Rakesh,
    I'd use applcust.txt and / or a script that parses the patch driver files.
    I've blogged about using a wrapper around adpatch, and this is an ideal use case for it, since the wrapper could stop before even calling adpatch:
    http://garethroberts.blogspot.com/2007/07/apps-dba-wish-adpatch-under-my.html
    Regards,
    Gareth
    Blog: http://garethroberts.blogspot.com
    Web: http://www.virtuate.com

  • Driver performance with CLOB

    Hi,
    I use the "thin" driver with Apple's WebObjects, and I noticed that when I do a fetch on a table with a CLOB column, it takes more than one minute to get the results back when the result set is more than 500 rows. So I did some tests with SQLGrinder, a SQL tool that also uses JDBC to talk to databases, and it shows the same slowdown. But if I remove the CLOB column, the fetch takes less than 3 seconds. Is Oracle's JDBC driver known to be slow in these situations? I use the 9.2.0.5.0 driver with a 1.4 JDK (Apple-supplied, on Mac OS X 10.3).

    Hi,
    How big is the data being stored in the CLOB column? Ideally, in the case of LOBs and BFILEs, the data is read either in chunks or as streams, and that needs to be coded separately. I am not sure that fetching the column directly using a select will be fast.
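    To illustrate the chunked/streamed approach, here is a minimal sketch against a hypothetical DOCS table with a BODY CLOB column (the table, column, and connection details are made up; adapt them to your environment):
        import java.io.Reader;
        import java.sql.*;

        // Reads each CLOB as a character stream in fixed-size chunks instead of
        // materializing it, and raises the row prefetch size per round trip.
        public class ClobFetch {
            public static void main(String[] args) throws Exception {
                Class.forName("oracle.jdbc.driver.OracleDriver");
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
                Statement stmt = conn.createStatement();
                stmt.setFetchSize(100); // fetch more rows per round trip

                ResultSet rs = stmt.executeQuery("SELECT id, body FROM docs");
                char[] buf = new char[8192];
                while (rs.next()) {
                    long id = rs.getLong(1);
                    Reader body = rs.getCharacterStream(2); // stream the CLOB
                    int total = 0;
                    for (int n; (n = body.read(buf)) != -1; ) {
                        total += n; // consume in 8 KB chunks
                    }
                    body.close();
                    System.out.println("row " + id + ": " + total + " chars");
                }
                rs.close();
                stmt.close();
                conn.close();
            }
        }
    If the timing improves dramatically with the CLOB consumed this way (or excluded entirely), that points at per-row LOB retrieval rather than the driver itself.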
    You can try selecting the columns other than the CLOB column and see the performance.
    Here are some benchmark results published on Ask Tom; you might be interested in them.
    http://asktom.oracle.com/pls/ask/f?p=4950:8:6959536613259786285::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1310802216301
    Thanks,
    Khalid

  • Merge records

    I have a need to merge records received from two different services in BPEL. Here's the requirement: a BPEL process invokes two different services. One of the services places a fixed-length flat file in the file system. The other service responds with a set of XML records. The XML records need to be transformed and merged with those in the flat file, eliminating duplicate records (in terms of an ID field) along the way. What could be an efficient way to do this? The XML file is at least 80MB in size. Will XSLT be memory-intensive?

    Not an ideal use case for SOA Suite, but achievable.
    If they are licensed for SOA Suite they can use OSB, which may be a better tool for this use case.
    Your XQuery can look something like this:
        let $Runners := $runners
        let $Odds := $odds
        return
            <Runners>
            {
                for $Runner in $Runners/ns1:Runner,
                    $Odd in $Odds/ns0:RunnerOdds/ns0:RunnerOdd
                where $Odd/ns0:Runner_Num = $Runner/ns1:Runner_Num
                return
                    <Runner>
                        <WIN_Odd>{ data($Odd/ns0:WIN_Odd) }</WIN_Odd>
                        <Runner_Num>{ data($Runner/ns1:Runner_Num) }</Runner_Num>
                        <PLC_Odd>{ data($Odd/ns0:PLC_Odd) }</PLC_Odd>
                        <Runner_Name>{ data($Runner/ns1:Runner_Name) }</Runner_Name>
                        <Jockey_Name>{ data($Runner/ns1:Jockey_Name) }</Jockey_Name>
                        <Runner_Status>{ data($Runner/ns1:Runner_Status) }</Runner_Status>
                    </Runner>
            }
            </Runners>
    What this code does is merge two payloads: one payload has the names of the horses, the other has the odds for each horse. You merge using the where clause, linking via the Runner_Num key.
    cheers
    James

  • Performance Management Plan - Customization

    Hi
    This is regarding the Performance Management Plan (PMP) in Oracle Performance Management. Currently my client uses a custom concurrent program which creates appraisals by calculating the job anniversary date. The client uses yearly appraisals, but the program is scheduled to run every two weeks; it picks up the teammates who have passed their job anniversary in that period and creates appraisals for them.
    There is no goal-setting or tracking process implemented currently, and the client is not using PMP now. I would like to propose PMP to the client. The PMP would create appraisals based on the hierarchy - Supervisor, Supervisor Assignment, Position, Organization.
    If a teammate is hired on 01-Jan-2008, the first appraisal should be on 01-Jan-2009 as per the client requirement. If PMP is used, it would create an appraisal in 2008 itself, which is wrong per the client requirement. Is there an option in PMP to exclude the teammate from the current PMP and include them in the next year's PMP?
    Thanks in advance
    Regards
    Ashish

    Hi,
    Using the PMP is ideal where the appraisal period is the same across the organization. In organizations where the appraisal period differs depending on factors like hire date, the PMP becomes a big maintenance burden: you would have to create a PMP for each individual employee in the organization.
    I suggest you instead use objective assessments in your appraisals and not use the PMP.
    Thanks,

  • Proper Partitioning for a table

    Dear Netters,
    We have a table that is defined as follows:
    CREATE TABLE RECORDSINGLECHOICEVALUE (
      RECORDFK        RAW(16)  NOT NULL,
      CHOICEFK        RAW(16)  NOT NULL,
      FIELDFK         RAW(16)  NOT NULL,
      SOURCEENTITYFK  RAW(16)  NOT NULL,
      CONSTRAINT RDSINGLECHOICEVAL_PK PRIMARY KEY (RECORDFK, FIELDFK)
    );
    In it, we store GUIDs that reference other tables in the application.
    There are generally the following types of queries that use the table:
    SELECT COUNT(DISTINCT t1.SourceEntityFk)
    FROM RECORDSINGLECHOICEVALUE t1
        INNER JOIN RECORDSINGLECHOICEVALUE t2 ON (
               t1.SourceEntityFk = t2.SourceEntityFk    -- they belong to the same Entity
               AND t1.RecordFk = t2.RecordFk            -- ... AND to the same Record
               AND t2.FieldFk = {some other guid value})
    WHERE t1.FieldFk = {some guid value}                -- always a single value
       AND t1.ChoiceFk IN {some list of guid values}    -- this part is optional
    or
    SELECT COUNT(DISTINCT t1.SourceEntityFk)
    FROM RECORDSINGLECHOICEVALUE t1
        INNER JOIN RECORDSINGLECHOICEVALUE t2 ON (
               t1.SourceEntityFk = t2.SourceEntityFk    -- they belong to the same Entity
               AND t2.FieldFk = {some other guid value})
    WHERE t1.FieldFk = {some guid value}                -- always a single value
       AND t1.ChoiceFk IN {some list of guid values}    -- this part is optional
    The table can be joined to itself multiple times.
    For partitioning, we used HASH partitioning on FieldFk (128 partitions were created), since this is a scalar that participates in 99% of the queries against the table. However, due to the nature of the data, some of the partitions are heavily skewed (one field is more prevalent than others), resulting in some partitions having < 10k rows and others having > 200M rows.
    Would you recommend an alternative partitioning scheme? Sub-partitions?
    Thank you in advance.
    --Alex

    >
    The table in question (and we have a few other ones very similarly defined), participates in many queries against the database. Queries can be formed in such a way that the user can pick the Field (FieldFk) and (optionally) ChoiceFks at will. This is a highly flexible user-driven query engine. Table(s) can be joined many times within the same query, resulting in sub-optimal performance.
    The goal is to come up with a schema (partitioning/indexing/any other) that will support a positive user experience. The 200M rows in a single partition was an example of where things start breaking loose. In the near future, this number can grow at least 10x.
    To clear up a business case, imagine human subjects, which have genetic variants. Say, there are 100 million people in the database (EntityFk). They all have 23 chromosomes, about 20,000 protein producing genes of interest (460000 combinations), and these have genetic variations (say, 10000) of different types (types are defined as ChoiceFk).
    The query would then try to identify subjects that have a specific type of gene variation (Field = "Gene variation", Choice = "Fusion"), are male (Field = "Gender", Choice = "Male"), have been diagnosed with a specific disorder (Field = "Diagnosis", Choice = "Specific Disorder"), and have a recording of treatment (Field = "Treatment", Choice not specified) in the database. So the table is getting joined onto itself in a few different ways, and many times (sometimes as many as 10).
    With stats in place, with index covering on Entity + Field + Choice (in all possible combinations thereof), with hash partitioning on Field alone (keys are GUIDs, so range partitioning, while possible, is kind of counter-intuitive), performance is suffering with increasing volume.
    We are evaluating other options, for different partition keys, indexing, and anything else in between.
    Any suggestions are much appreciated.
    >
    Thanks for the additional information. From what you describe, it sounds like a classic use case for more of a star-schema architecture, or am I still missing something?
    To see what I am talking about take a look at my extensive reply in this thread from a year ago
    Re: How to design a fact table to keep track of active dimensions? (posted Mar 18, 2012)
    I provided example code that should give you the idea of what I mean.
    For use cases like this bitmap indexes are VERY efficient. And since you said this:
    >
    The problem is performance. Maintenance side of the house is minimal - data is loaded from an external source, once every X days, via ETL, and so that is not a concern.
    >
    you should only have to rebuild/update the bitmap indexes every X days also. The main drawback of bitmap indexes is the performance hit when they are updated. They are NOT appropriate for OLTP systems, but for OLAP, where the index updates can be done offline in batch mode (or the indexes rebuilt), that is an ideal use case.
    You can easily conduct some tests using the example code I provide in that thread link as a template.
    In my example the attributes I used were: age, beer, marital_status, softdrink, state, summer_sport.
    You would use attributes like: Gene variation, Gender, Diagnosis, Treatment.
    Bitmap indexes store a bit for NULL values also so you could use NULL to indicate NO TREATMENT.
    Your goal would be to construct a query that uses a logical combination of your attributes to specify what you are interested in. Then, as you can see by the plans I posted, Oracle will take it from there and perform bitmap index operations using ONLY the indexes. This is one sample query I provided:
    SQL> select rowid from star_fact where
         (state = 'CA') or (state = 'CO')
         and (age = 'young') and (marital_status = 'divorced')
         and (((summer_sport = 'baseball') and (softdrink = 'pepsi'))
         or ((summer_sport = 'golf') and (beer = 'coors')));
    Your query would use your attribute names and values. Notice also that there are no multiple joins to the same table, although there can be if necessary without preventing Oracle from using the bitmap indexes efficiently.

Maybe you are looking for

  • Unchange Free Goods Quantity in sales order

  • How do you sort the entire table in numbers with OSX Maverick?

  • Need to write a report to show who is using SAP and to what extent

  • Dynamic Custom Font Embedding To Textflow

  • Error when loading web page when connected to wifi