Locale-aware Tree labelFunction

Hi all,
I need to implement a Tree labelFunction that is locale aware. Here is an example:
class Localization extends EventDispatcher {
    public function setLocale(locale:String):void {
        // ...switch the current locale, then notify listeners...
        this.dispatchEvent(new Event("localizationChange"));
    }

    [Bindable(event="localizationChange")]
    public function getLabel(key:String):String {
        // return the label for key according to the current locale
    }
}
<mx:Script>
<![CDATA[
    private function myLabelFunc(item:*):String {
        return localization.getLabel(item.data);
    }
]]>
</mx:Script>
<mx:Tree labelFunction="myLabelFunc" dataProvider=....../>
My problem is that when the locale is changed, I don't see the labels change to the selected language...
Please advise

I share your confusion, especially since this behaviour actually changed between 1.2 and 1.3. I found the previous behavior (where the default locale did not come into the picture) much more intuitive and useful.
Especially when you create an application that will run in several geographical locations worldwide, you should be able to expect the same behavior no matter which box it happens to run on (an en_US vs. a ja_JP system locale, for instance).
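
For the refresh problem itself: a labelFunction is only evaluated when the list renders, so the [Bindable] metadata on getLabel() never reaches the Tree. A minimal sketch of one common workaround, assuming a Tree with id="myTree" (a hypothetical id) and the Localization instance above:

localization.addEventListener("localizationChange", function(event:Event):void {
    // invalidateList() forces the visible item renderers to redraw,
    // which re-invokes myLabelFunc with the labels for the new locale
    myTree.invalidateList();
});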

Similar Messages

  • A PKGBUILD that helps you compile kernel from local source tree

    I don't know if someone did this before. Hours ago I wrote a PKGBUILD file for compiling a kernel;
    it is different from the one from ABS. It allows you to
    compile a kernel from an existing kernel source tree and leave that tree clean,
    honoring the Arch way: this means you keep a clean filesystem.
    It is actually because I'm currently playing with The Eudyptula Challenge,
    and I'm tired of compressing/decompressing a kernel tree all the time. If you are a kernel developer, you
    may also find it useful.
    The PKGBUILD file worked on my machine; I will add headers and docs later.
    Oh, almost forgot: here is my PKGBUILD:
    #So we will have a clean src tree
    pkgbase=linux-test
    _kernelname=${pkgbase#linux}   #this one strips the "linux" off; must be set before provides/conflicts below
    _kernel_bin=kernel_build
    #the variables you have to provide
    _builddir=kernel_build
    kernel_src_dir='/home/developer/Courses/kernel-base'
    _srcname=kernel_tree
    #end of the variables you have to provide
    pkgver=3.8.1
    pkgrel=1
    pkgdesc="The Linux kernel and modules"
    depends=('coreutils' 'linux-firmware' 'kmod' 'mkinitcpio>=0.7')
    makedepends=('xmlto' 'docbook-xsl' 'kmod' 'inetutils' 'bc')
    optdepends=('crda: to set the correct wireless channels of your country')
    provides=("kernel26${_kernelname}=${pkgver}")
    conflicts=("kernel26${_kernelname}")
    replaces=("kernel26${_kernelname}")
    arch=('i686' 'x86_64')
    url="http://www.kernel.org/"
    license=('GPL2')
    source=(#if we provide this, means kernel compile progress is already done
    "${_kernel_bin}.tar.xz"
    'linux.preset')
    sha256sums=('65847bc847344434657db729d2dde4a408e303ea29ae1409520cecee8da6fc3d'
    '2c2e8428e2281babcaf542e246c2b63dea599abb7ae086fa482081580f108a98')
    prepare() {
    #XXX:checked
    #build dir has to be the same as kernel_bin files, then builddir is created
    #automatically by tar
    if [ "${kernel_src_dir}" == "" ]; then
    return 1
    fi
    #provide kernel source tree for compile and move modules
    ln -s ${kernel_src_dir} ${srcdir}/${_srcname}
    mkdir -p "${srcdir}/${_srcname}"
    #we need to check here if there exist kernel bin files
    if [ "${_kernel_bin}" == "" ]; then
    cd "${srcdir}/${_srcname}"   #run from the source tree so the O= build dir gets populated
    make O="${srcdir}/${_builddir}" menuconfig
    fi
    }
    build() {
    #XXX:checked
    cd "${srcdir}/${_srcname}"
    #we need to check here if there exist kernel bin files
    if [ "${_kernel_bin}" == "" ]; then
    #return 1
    make O="${srcdir}/${_builddir}" bzImage modules
    fi
    #otherwise this step is already done
    }
    _package() {
    #we dont need to worry about mkinitcpio, depmod thing, They are done by
    #install script, we need to provide a preset and install file instead.
    #we build kernel objs on _builddir, and install them in pkgdir
    #install binary files, this means we have a compiled binary tree
    cd "${srcdir}/${_srcname}"
    #echo "$(pwd)"
    KARCH=x86
    install=linux.install
    # get kernel version
    _kernver="$(make O="${srcdir}/${_builddir}" kernelrelease)"
    _kernver=$(echo "${_kernver}" | sed -n 2p -)
    #strip the -dirty away
    _kernver=${_kernver%-*}
    _basekernel=${_kernver%%-*}
    _basekernel=${_basekernel%.*}
    mkdir -p "${pkgdir}"/{lib/modules,lib/firmware,boot}
    make O="${srcdir}/${_builddir}" INSTALL_MOD_PATH="${pkgdir}" modules_install
    cp "${srcdir}/${_builddir}"/arch/$KARCH/boot/bzImage "${pkgdir}/boot/vmlinuz-${pkgbase}"
    # set correct depmod command for install
    cp -f "${startdir}/${install}" "${startdir}/${install}.pkg"
    true && install=${install}.pkg
    sed -e "s/KERNEL_NAME=.*/KERNEL_NAME=${_kernelname}/" -i "${startdir}/${install}"
    sed "s/KERNEL_VERSION=.*/KERNEL_VERSION=${_kernver}/" -i "${startdir}/${install}"
    # install mkinitcpio preset file for kernel
    install -D -m644 "${srcdir}/linux.preset" "${pkgdir}/etc/mkinitcpio.d/${pkgbase}.preset"
    sed \
    -e "1s|'linux.*'|'${pkgbase}'|" \
    -e "s|ALL_kver=.*|ALL_kver=\"/boot/vmlinuz-${pkgbase}\"|" \
    -e "s|default_image=.*|default_image=\"/boot/initramfs-${pkgbase}.img\"|" \
    -i "${pkgdir}/etc/mkinitcpio.d/${pkgbase}.preset"
    # remove build and source links
    rm -f "${pkgdir}"/lib/modules/${_kernver}/{source,build}
    # remove the firmware
    rm -rf "${pkgdir}/lib/firmware"
    # gzip -9 all modules to save 100MB of space
    find "${pkgdir}" -name '*.ko' -exec gzip -9 {} \;
    # make room for external modules
    ln -s "../extramodules-${_basekernel}${_kernelname:--ARCH}" "${pkgdir}/lib/modules/${_kernver}/extramodules"
    # add real version for building modules and running depmod from post_install/upgrade
    mkdir -p "${pkgdir}/lib/modules/extramodules-${_basekernel}${_kernelname:--ARCH}"
    echo "${_kernver}" > "${pkgdir}/lib/modules/extramodules-${_basekernel}${_kernelname:--ARCH}/version"
    # Now we call depmod...
    #echo "Call Depmod"
    cp "${srcdir}/${_builddir}/System.map" System.map
    depmod -b "${pkgdir}" -F System.map "${_kernver}"
    #echo "Called Depmod"
    # move module tree /lib -> /usr/lib
    mkdir -p "${pkgdir}/usr"
    mv "${pkgdir}/lib" "${pkgdir}/usr/"
    # add vmlinux
    install -D -m644 "${srcdir}/${_builddir}/"vmlinux "${pkgdir}/usr/lib/modules/${_kernver}/build/vmlinux"
    pkgname=("${pkgbase}")
    for _p in ${pkgname[@]}; do
    eval "package_${_p}() {
    _package${_p#${pkgbase}}
    done
    and here is the address of it on github
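    If you want to try it, the usual makepkg flow applies (a sketch; the directory name is hypothetical, and kernel_src_dir in the PKGBUILD must point at your source tree first):

    $ cd ~/abs/local/linux-test   # directory containing this PKGBUILD and linux.preset
    $ makepkg -s                  # resolves the makedepends, then builds the package
    # pacman -U linux-test-3.8.1-1-x86_64.pkg.tar.xz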
    Last edited by xedchou (2014-12-23 12:41:55)

    Based on the title alone I almost reflexively binned this thread.  Please rename this thread to *something* relating to what you're actually posting.

  • Inbox and Drafts show unread message count of all folders within. Can I get the local folders to do this too?

    I'm filtering mail into folders in the local folder tree. There may be two or three levels, e.g., Local Folders -> Lists -> OS -> MacOSX. Mail that is unread in the MacOSX folder is counted and displayed when the MacOSX folder is visible. However, keeping the entire tree on display is a nuisance, and it is long, so I don't see things at the bottom; I have to scroll to reveal the rest of the items in the folder tree. For these reasons I'd prefer to have the folder tree in its minimal state.
    Is it possible to have the top-level folders display an unread message count for all the enclosed folders?

    When I collapse the folder tree of Local Folders, it shows the unread count for the subfolders.

  • Local variable's VariableElement not accessible through treepath

    Hi,
    I'm extending the TreePathScanner visitor to locate and handle certain annotations associated with variables of any kind (fields, parameters and local variables).
    To do so I overrode the visitVariable method which, as far as I understand, should be able to obtain the corresponding javax.lang.model VariableElement for any variable (whatever its kind) based on the current tree path; see the code below.
    It works well with fields and parameters; however, with local variables Trees.getElement simply returns null. All the information is in fact in the VariableTree code node and I could retrieve what I need from it, but I'd rather refrain from using the com.sun.* API if not really necessary, and of course avoid implementing the processing twice.
    Perhaps the javax.lang.model object tree is not built for elements within a method body? Or is this a bug? The com.sun.source.util.Trees#getElement javadoc only says that null is returned if the element is not "available".
    Thanks in advance.
    import javax.lang.model.element.VariableElement;
    import com.sun.source.util.TreePathScanner;
    import com.sun.source.util.Trees;
    import com.sun.source.tree.VariableTree;
    import com.sun.source.util.TreePath;
    public class MyVisitor extends TreePathScanner<Object,Trees> {
         @Override
         public Object visitVariable(VariableTree node, Trees trees) {
              TreePath path = getCurrentPath();
              VariableElement ve = (VariableElement) trees.getElement(path);
              if (ve != null) {
                   // The case for parameters and fields.
                   process(ve);
              } else {
                   // Funnily, getElement returns null for local variables.
                   process(node);
              }
              return super.visitVariable(node, trees);
         }
    }

    I had a similar problem where trees.getElement(TreePath) was returning null. In my case it had to do with the fact that symbol resolution hadn't been done yet. I was trying to invoke my TreePathScanner after calling javacTask.parse(), but I had to switch to invoking it after javacTask.analyze(); otherwise trees.getElement(TreePath) always returned null. My issue was with a TreePath for a method, though.
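    A minimal driver sketch of that parse-then-analyze ordering (Demo.java is a hypothetical source file; MyVisitor is the class above):

    import javax.tools.*;
    import com.sun.source.tree.CompilationUnitTree;
    import com.sun.source.util.*;

    public class Runner {
        public static void main(String[] args) throws Exception {
            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null);
            JavacTask task = (JavacTask) compiler.getTask(null, fm, null, null, null,
                    fm.getJavaFileObjects("Demo.java"));
            Trees trees = Trees.instance(task);
            Iterable<? extends CompilationUnitTree> units = task.parse();
            task.analyze(); // attributes symbols; without this, getElement() returns null for locals
            for (CompilationUnitTree unit : units) {
                new MyVisitor().scan(new TreePath(unit), trees);
            }
        }
    }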

  • Locale Date and Time display

    Hi,
    I don't know if this is possible with Flex. I would like to
    read the user's regional settings (from his operating system) for
    displaying date and time, and then display these values in the
    corresponding format. So how do you read those settings from the user's OS? Is
    there a way?
    thx in adv

    You cannot read OS settings using Flex. Maybe you can do it
    using AIR. For locale-aware date/time formats, have you looked into the
    Date.toLocaleString(),
    Date.toTimeString(),
    Date.toLocaleTimeString(),
    Date.toDateString(), and
    Date.toLocaleDateString()
    methods?
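    A quick sketch exercising a few of them (the exact strings depend on the Flash runtime rather than on the OS regional settings):

    var now:Date = new Date();
    trace(now.toDateString());       // date portion only
    trace(now.toTimeString());       // time portion, including the timezone offset
    trace(now.toLocaleDateString()); // date portion, "locale"-style formatting
    trace(now.toLocaleTimeString()); // time portion, "locale"-style formatting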

  • Importing sca into local development

    Hi,
    I need to create a Web Dynpro DC in my local development workspace. It is not connected to any track.
    I also want to use a DC from LM-TOOLS.sca. How do I import this into my local development file system? The only SCAs that are present are SAP-JEE, SAP_BUILDT, and SAP_JTECHS.
    On a track I know I can import it in the transport studio,
    but how do I import it into a local development tree?
    thanks
    Padmaja

    Hi,
    If the intention is to create a Web Dynpro DC, these links help:
    http://help.sap.com/saphelp_nw04/helpdata/en/60/0a02403d62c442e10000000a1550b0/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/34/6b63c1a7ba6845b85239999b7061bc/content.htm
    For importing configurations via a local configuration file:
    http://help.sap.com/saphelp_nw04/helpdata/en/9f/34353ed106ec2ce10000000a114084/content.htm
    hope this helps you!
    Regards,
    RK

  • How to obtain an object localized name?

    Hi everybody,
    I have a portlet that searches for folders and documents in the KD using the IDocumentFolderManager and IDocumentManager objects. The problem is that when I use obj.getName(), I always get the object's default name; it doesn't respect the user's localization.
    This is part of the code:
    IDocumentQuery documentQuery = docMan.createQuery(folderID);
    documentQuery.setShowUnapproved(false);
    documentQuery.setSortProperty(ObjectProperty.Name);
    IObjectQuery childDocs = documentQuery.execute();
    for (int j = 0; j < childDocs.getRowCount(); j++) {
        IObjectQueryRow document = childDocs.getRow(j);
        System.out.println(document.getName());
    }
    Does anyone know how to obtain the localized name?
    Thanks!
    Claudia

    Ok, I give up. I tried this:
    <%@page import="com.plumtree.remote.portlet.*" %>
    <%@page import="com.plumtree.remote.prc.*" %>
    <%@page import="java.util.Locale" %>
    <%
        response.setLocale(Locale.FRENCH);
        IDocumentFolderManager folderMan = PortletContextFactory.createPortletContext(request, response)
                .getRemotePortalSession().getDocumentFolderManager();
        IObjectQuery subfoldersQuery = folderMan.getSubFolders(1);
        for (int i = 0; i < subfoldersQuery.getRowCount(); ++i) {
            out.write(subfoldersQuery.getRow(i).getName() + "<br/>");
        }
    %>
    I got the primary name for the folder, not the French name. I tried setting my portal user's locale to French/France and that didn't help either. I even tried setting the portlet's primary language to French. None of that helped one bit.
    Then I saw this post from 2005:
    http://forums.bea.com/bea/thread.jspa?messageID=500018945&tstart=0
    Basically, I think this is just broken or not implemented. I would contact support and file an SR. Unless I'm missing something, it looks like the PRC is not locale-aware. And that's a bug, AFAIK.
    Chris Bucchere | bdg | [email protected] | http://www.thebdgway.com

  • Creating a local mirror of repository for home use

    Hi,
    I am thinking of syncing the whole Arch mirror (at least current and extra). I could just download the whole directory from the net, including the db file, though I am a bit scared that, since it takes a while, somebody could update something during the download process and the whole thing would no longer work, due to a db file that no longer matches.
    How could I do that? I'd like to avoid creating my own repo with gensync; I want the original one :-)
    thanks for any help provided!!!

    I'm not quite sure what you're asking.  But I synced my machine with the Arch repos using "abs".  I make my own packages in `/var/abs/local` and have my custom package repo in `/home/pkgs/` (using gensync).  Whenever I want to customize an Arch package (in `/var/abs/local`), I just copy the entire directory from `/var/abs/<package>` to the local path.
    The actual "abs" sync only takes a matter of seconds, since it just downloads the PKGBUILDs and any associated patches, scripts, etc., not the actual packages.
    There was only one occasion that I can remember when the Arch repos were not in sync with mine, right after I had just used "abs" in fact.  It was while rebuilding the "bash" package.  That was an easy enough fix though.  I posted the problem, and before I knew it, the current "bash" package was in sync again, within an hour I think.  That's a very rare case indeed, when I synced and the developers hadn't yet updated the repo with the change.  Either way, just keep resyncing your local "abs" tree and everything will be kept up to date.
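    If you really do want the full binary mirror rather than abs, rsync is the usual tool; a sketch, assuming your chosen mirror exports rsync (mirror.example.com and the destination path are placeholders). A db file that went stale mid-download is no disaster, because a re-run only transfers what changed:

    $ rsync -rtlv --delete rsync://mirror.example.com/archlinux/extra/os/i686/ /srv/mirror/extra/os/i686/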

  • Backup imap folders hierarchy/tree

    Hello,
    I have some imap accounts (like mobileme) set up in Mail. I have, for those imap accounts, imap folders.
    I would like to back them up every month. I can copy all mail from an imap folder into a local folder, but I have a folder tree like Folder1/sub-folder1/sub-sub-folder1, etc. So I am looking for some "copy folder to local folder" capability in Mail or in a third-party app.
    As I have around 60 folders/subfolders, it is not a solution to recreate/add each folder locally every time I want to back up and go through each folder to copy the folder's messages to the corresponding local folder.
    Also, being able to do the reverse (i.e. move a local folder tree to an imap account) would be nice.
    Any advice ?
    Marc
    Message was edited by: Marcojxjx

    Like DrClap said, plus...
    No, the Message-ID is not guaranteed unique. Messages can have no Message-ID, two messages can have the same Message-ID,
    and Message-IDs can change. All of these events are unlikely, but not impossible. Depending on how you're using the Message-ID,
    this may or may not be an issue for you.

  • How to access JNDI tree of Admin Server from Managed Server

    Hello,
    I created Managed and Admin Server for Domain.
    On Managed Server I use:
    InitialContext con = new InitialContext()
    It points to Managed Server local JNDI tree and
    Managed Server can't find JNDI tree of Admin Server.
    Looks like Managed Server is regular remote client of Admin Server.
    How to access JNDI tree of Admin Server from Managed Server?
    Thanks.
    Oleg.

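    One standard WebLogic approach (a sketch, not confirmed by this thread; t3://adminhost:7001 is a placeholder for the Admin Server's listen address) is to build the InitialContext with an explicit PROVIDER_URL instead of the no-arg constructor, which targets the local server:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    Hashtable<String, String> env = new Hashtable<String, String>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, "t3://adminhost:7001"); // Admin Server, not the local Managed Server
    Context ctx = new InitialContext(env);
    // ctx now looks up names in the Admin Server's JNDI tree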

  • Dynamically Loading xml in tree  control

    Hi, I am new to Flex. I need to load the following XML into a Tree:
    <?xml version="1.0" encoding="utf-8" ?>
    <catalog>
    <product>
    <name>laptop3</name>
    <price>"1256"</price>
    <qty>"45"</qty>
    </product>
    <product>
    <name>"CAR"</name>
    <price>"45000"</price>
    <qty>"7"</qty>
    </product>
    <product>
    <name>"Laptop2"</name>
    <price>"450011"</price>
    <qty>"7022888"</qty>
    </product>
    <product>
    <name>"Laptop"</name>
    <price>"45000"</price>
    <qty>"70"</qty>
    </product>
    <product>
    <name>"Laptop2"</name>
    <price>"45000"</price>
    <qty>"7022"</qty>
    </product>
    <product>
    <name>"Laptop2"</name>
    <price>"45000"</price>
    <qty>"7022888"</qty>
    </product>
    </catalog>
    I am unable to load the exact xml structure ...
    Please help me
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
        <mx:HTTPService id="srv" url="http://localhost:8080/SampleWebApp/test.jsp"/>
        <mx:DataGrid dataProvider="{srv.lastResult.catalog.product}" width="100%" height="100%"/>
        <mx:Tree labelField="@val" width="201" height="100%" showRoot="true"
        showInAutomationHierarchy="true" id="machineTree" dataProvider="{srv.lastResult.catalog.product}"></mx:Tree> 
        <mx:Button label="Get Data" click="srv.send()"/>
    </mx:Application>
    I am able to load it into the DataGrid, but not into the Tree. Help me.

    The same as you have, but with embedded XML.
    <mx:XML id="tstData">
    <catalog>
    <product>
    <name>laptop3</name>
    <price>"1256"</price>
    <qty>"45"</qty>
    </product>
    <product>
    <name>"CAR"</name>
    <price>"45000"</price>
    <qty>"7"</qty>
    </product>
    <product>
    <name>"Laptop2"</name>
    <price>"450011"</price>
    <qty>"7022888"</qty>
    </product>
    <product>
    <name>"Laptop"</name>
    <price>"45000"</price>
    <qty>"70"</qty>
    </product>
    <product>
    <name>"Laptop2"</name>
    <price>"45000"</price>
    <qty>"7022"</qty>
    </product>
    <product>
    <name>"Laptop2"</name>
    <price>"45000"</price>
    <qty>"7022888"</qty>
    </product>
    </catalog>
    </mx:XML>
    <mx:Tree labelFunction="treeLabel" width="201" height="100%" showRoot="true"
        showInAutomationHierarchy="true" id="machineTree" dataProvider="{tstData.product}">
    But it doesn't matter.
    Try to debug. Add this:
    [Bindable]
    private var treeData:XMLListCollection;

    public function onResult(event:ResultEvent):void {
        // debug here to see what you get and what type this data is
        treeData = new XMLListCollection(XML(event.result).product);
    }

    <mx:HTTPService id="srv" url="http://localhost:8080/SampleWebApp/test.jsp" resultFormat="e4x" result="onResult(event)"/>
    And use treeData as the dataProvider for the Tree.
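    The snippet above references a treeLabel function without showing it; a hypothetical sketch of what it could look like for this data:

    private function treeLabel(item:Object):String {
        var node:XML = XML(item);
        // label <product> branches by their <name> child; leaf nodes show their own text
        return node.localName() == "product" ? node.child("name").toString() : node.toString();
    }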

  • A very old package version is still in the ABS tree

    I wanted to add some compile-time options to the qemu package so I synced my local ABS tree and made a copy of the relevant PKGBUILD and its associated files:
    $ sudo abs
    $ cp -r /var/abs/extra/qemu/ ~/abs/local/
    However, I noticed that this particular qemu PKGBUILD is at version 1.4.2-2, whereas the qemu version available from the extra repo is currently at 1.5.1-2.
    To be sure I wasn't mistaken, I synced my local abs tree twice and had someone else confirm that they too are seeing the qemu 1.4.2-2 PKGBUILD after updating their abs tree.
    Is this to be expected?  How often is the central ABS tree updated?

    WonderWoofy wrote: To add to the actual topic of the thread.  If you really need the updated pkgbuild (and any accompanying files), you can always use the packages web interface and follow the link to the git repo that holds all those stuffs.
    You need to be careful if there are versions of this package in testing and non-testing repos, as the web interface will point to the testing ones e.g. https://www.archlinux.org/packages/extra/i686/amule/ If you click 'Source Files' link in the top right and then view the PKGBUILD https://projects.archlinux.org/svntogit … ages/amule you will see it is for version 10803-3, even though we wanted version 10803-2 from [extra].
    For amule, there shouldn't be a difference, but for some other packages, there might be: https://bbs.archlinux.org/viewtopic.php?id=164759

  • Long running update

    Hello,
    Can anyone explain why this simple update statement against a single partition in a large table (~300,000 rows, ~1GB in size for the single partition) is taking a very long time? The most unusual thing I see in the stats is the HUGE number of buffer gets.
    The table def is below, and there are also 25 local b-tree indexes on this table (too much to paste here), each on a single column, residing in a separate tablespace from the table.
    I don't have a trace and will not be able to get one. Any theories as to the high buffer gets? A simple table scan (which occurs many times in our batch) against a single partition usually takes between 30-60 seconds. Sometimes the table scan goes haywire and I see these huge buffer gets, somewhat higher disk reads, and much longer execution times. There are fewer than 3 million rows in the partition being acted on, and we're only updating a couple of columns; I simply cannot understand why Oracle would be getting a block (whether it was in cache already or not) over 1 BILLION times to perform this update.
    This is Oracle 11g 11.1.0.7 on RHL 5.3, 2-node RAC, but all processing is on instance 1 and instance 2 is shut down at this point to avoid any possibility of cache fusion issues.
    SQL Id: 0np3ccxhf9jmc    Elapsed Time (ms): 1.79E+07
    UPDATE ESRODS.EXTR_CMG_TRANSACTION_HISTORY SET RULE_ID_2 = '9285', REPORT_CODE = 'MMKT'
    WHERE EDIT_CUSIP_NUM = '19766G868' AND PROCESS_DATE BETWEEN '01-JAN-2010' AND '31-JAN-2010'
    AND RULE_ID_2 IS NULL
    Plan Statistics
    -> % Total DB Time is the Elapsed Time of the SQL statement divided
    into the Total Database Time multiplied by 100
    Stat Name Statement Per Execution % Snap
    Elapsed Time (ms) 1.79E+07 17,915,656.1 2.3
    CPU Time (ms) 1.18E+07 11,837,756.4 2.5
    Executions 1 N/A N/A
    Buffer Gets 1.09E+09 1.089168E+09 3.3
    Disk Reads 246,267 246,267.0 0.0
    Parse Calls 1 1.0 0.0
    Rows 326,843 326,843.0 N/A
    User I/O Wait Time (ms) 172,891 N/A N/A
    Cluster Wait Time (ms) 0 N/A N/A
    Application Wait Time (ms) 0 N/A N/A
    Concurrency Wait Time (ms) 504,047 N/A N/A
    Invalidations 0 N/A N/A
    Version Count 21 N/A N/A
    Sharable Mem(KB) 745 N/A N/A
    Execution Plan
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | UPDATE STATEMENT | | | | 36029 (100)| | | |
    | 1 | UPDATE | EXTR_CMG_TRANSACTION_HISTORY | | | | | | |
    | 2 | PARTITION RANGE SINGLE| | 305K| 21M| 36029 (1)| 00:05:16 | 62 | 62 |
    | 3 | TABLE ACCESS FULL | EXTR_CMG_TRANSACTION_HISTORY | 305K| 21M| 36029 (1)| 00:05:16 | 62 | 62 |
    Full SQL Text
    SQL ID: 0np3ccxhf9jmc
    UPDATE ESRODS.EXTR_CMG_TRANSACTION_HISTORY SET RULE_ID_2 = '9285', REPORT_CODE = 'MMKT'
    WHERE EDIT_CUSIP_NUM = '19766G868' AND PROCESS_DATE BETWEEN '01-JAN-2010' AND '31-JAN-2010'
    AND RULE_ID_2 IS NULL
    Table def:
    CREATE TABLE EXTR_CMG_TRANSACTION_HISTORY (
    TRANSACTION_ID NUMBER(15) NOT NULL,
    CREATE_DATE DATE,
    CREATE_USER VARCHAR2(80 BYTE),
    MODIFY_DATE DATE,
    MODIFY_USER VARCHAR2(80 BYTE),
    EXCEPTION_FLG CHAR(1 BYTE),
    SOURCE_SYSTEM VARCHAR2(20 BYTE),
    SOURCE_TYPE VARCHAR2(32 BYTE),
    TRANSACTION_STATUS VARCHAR2(8 BYTE),
    FUND_ID NUMBER(15),
    FUND_UNIT_ID NUMBER(15),
    FROM_FUND_ID NUMBER(15),
    FROM_FUND_UNIT_ID NUMBER(15),
    EXECUTING_DEALER_ID NUMBER(15),
    EXECUTING_BRANCH_ID NUMBER(15),
    CLEARING_DEALER_ID NUMBER(15),
    CLEARING_BRANCH_ID NUMBER(15),
    BRANCH_PERSON_MAP_ID NUMBER(15),
    BP_REP_MAP_ID NUMBER(15),
    REP_ID NUMBER(15),
    PERSON_ID NUMBER(15),
    TPA_DEALER_ID NUMBER(15),
    TRUST_DEALER_ID NUMBER(15),
    TRANS_CODE_ID NUMBER(15),
    EDIT_DEALER_NUM VARCHAR2(30 BYTE),
    EDIT_BRANCH_NUM VARCHAR2(50 BYTE),
    EDIT_REP_NUM VARCHAR2(100 BYTE),
    EDIT_CUSIP_NUM VARCHAR2(9 BYTE),
    TRANS_TYPE VARCHAR2(80 BYTE),
    TRANSACTION_CD VARCHAR2(8 BYTE),
    TRANSACTION_SUFFIX VARCHAR2(8 BYTE),
    SHARE_BALANCE_IND VARCHAR2(2 BYTE),
    PROCESS_DATE DATE,
    BATCH_DATE DATE,
    SUPER_SHEET_DATE DATE,
    CONFIRM_DATE DATE,
    TRADE_DATE DATE,
    SETTLE_DATE DATE,
    PAYMENT_DATE DATE,
    AM_PM_CD VARCHAR2(2 BYTE),
    TRUST_DEALER_NUM VARCHAR2(7 BYTE),
    TPA_DEALER_NUM VARCHAR2(7 BYTE),
    TRUST_COMPANY_NUM VARCHAR2(10 BYTE),
    DEALER_NUM VARCHAR2(25 BYTE),
    BRANCH_NUM VARCHAR2(50 BYTE),
    REP_NUM VARCHAR2(100 BYTE),
    DEALER_NAME VARCHAR2(80 BYTE),
    REP_NAME VARCHAR2(80 BYTE),
    SOCIAL_SECURITY_NUMBER VARCHAR2(9 BYTE),
    ACCT_NUMBER_CD VARCHAR2(6 BYTE),
    ACCT_NUMBER VARCHAR2(20 BYTE),
    ACCT_SHORT_NAME VARCHAR2(80 BYTE),
    FROM_TO_ACCT_NUM VARCHAR2(20 BYTE),
    EXTERNAL_ACCT_NUM VARCHAR2(14 BYTE),
    NAV_ACCT VARCHAR2(1 BYTE),
    MANAGEMENT_CD VARCHAR2(16 BYTE),
    PRODUCT VARCHAR2(80 BYTE),
    SUBSET_PRODUCT VARCHAR2(3 BYTE),
    FUND_NAME VARCHAR2(80 BYTE),
    FUND_NUM VARCHAR2(7 BYTE),
    FUND_CUSIP_NUM VARCHAR2(9 BYTE),
    TICKER_SYMBOL VARCHAR2(10 BYTE),
    APL_FUND_TYPE VARCHAR2(10 BYTE),
    LOAD_INDICATOR VARCHAR2(50 BYTE),
    FROM_TO_FUND_NUM VARCHAR2(7 BYTE),
    FROM_TO_FUND_CUSIP_NUM VARCHAR2(9 BYTE),
    CUM_DISCNT_NUM VARCHAR2(9 BYTE),
    NSCC_CONTROL_CD VARCHAR2(15 BYTE),
    NSCC_NAV_REASON_CD VARCHAR2(1 BYTE),
    BATCH_NUMBER VARCHAR2(20 BYTE),
    ORDER_NUMBER VARCHAR2(16 BYTE),
    CONFIRM_NUMBER VARCHAR2(9 BYTE),
    AS_OF_REASON_CODE VARCHAR2(3 BYTE),
    SOCIAL_CODE VARCHAR2(3 BYTE),
    NETWORK_MATRIX_LEVEL VARCHAR2(1 BYTE),
    SHARE_PRICE NUMBER(15,4),
    GROSS_AMOUNT NUMBER(15,2),
    GROSS_SHARES NUMBER(15,4),
    NET_AMOUNT NUMBER(15,2),
    NET_SHARES NUMBER(15,4),
    DEALER_COMMISSION_CODE CHAR(1 BYTE),
    DEALER_COMMISSION_AMOUNT NUMBER(15,2),
    UNDRWRT_COMMISSION_CODE CHAR(1 BYTE),
    UNDRWRT_COMMISSION_AMOUNT NUMBER(15,2),
    DISCOUNT_CATEGORY VARCHAR2(2 BYTE),
    LOI_NUMBER VARCHAR2(9 BYTE),
    RULE_ID_1 NUMBER(15),
    RULE_ID_2 NUMBER(15),
    OMNIBUS_FLG CHAR(1 BYTE),
    MFA_FLG CHAR(1 BYTE),
    REPORT_CODE VARCHAR2(80 BYTE),
    TERRITORY_ADDR_CODE VARCHAR2(3 BYTE),
    ADDRESS_ID NUMBER(15),
    POSTAL_CODE_ID NUMBER(15),
    CITY VARCHAR2(50 BYTE),
    STATE_PROVINCE_CODE VARCHAR2(5 BYTE),
    POSTAL_CODE VARCHAR2(12 BYTE),
    COUNTRY_CODE VARCHAR2(5 BYTE),
    LOB_ID NUMBER(15),
    CHANNEL_ID NUMBER(15),
    REGION_ID NUMBER(15),
    TERRITORY_ID NUMBER(15),
    EXCEPTION_NOTES VARCHAR2(4000 BYTE),
    SOURCE_RECORD_ID NUMBER(15),
    LOAD_ID NUMBER(15),
    BIN VARCHAR2(20),
    SHARE_CLASS VARCHAR2(50),
    ACCT_PROD_ID NUMBER,
    ORIGINAL_FUND_NUM VARCHAR2(7),
    ORIGINAL_FROM_TO_FUND_NUM VARCHAR2(7),
    ACCT_PROD_REGISTRATION_ID NUMBER,
    REGISTRATION_LINE_1 VARCHAR2(60),
    REGISTRATION_LINE_2 VARCHAR2(60),
    REGISTRATION_LINE_3 VARCHAR2(60),
    REGISTRATION_LINE_4 VARCHAR2(60),
    REGISTRATION_LINE_5 VARCHAR2(60),
    REGISTRATION_LINE_6 VARCHAR2(35),
    REGISTRATION_LINE_7 VARCHAR2(35),
    SECONDARY_LOB_ID NUMBER(15,0),
    SECONDARY_CHANNEL_ID NUMBER(15,0),
    SECONDARY_REGION_ID NUMBER(15,0),
    SECONDARY_TERRITORY_ID NUMBER(15,0),
    ACCOUNT_OVERRIDE_PRIORITY_CODE NUMBER(3,0)
    )
    TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
    PCTUSED 0
    PCTFREE 25
    INITRANS 1
    MAXTRANS 255
    NOLOGGING
    PARTITION BY RANGE (PROCESS_DATE)
    (PARTITION P_ESRODS_EXTR_CMG_TRAN_PRE2005 VALUES LESS THAN (TO_DATE(' 2005-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    COMPRESS
    TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
    PCTFREE 25
    INITRANS 100
    MAXTRANS 255
    STORAGE (
    INITIAL 5M
    NEXT 5M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
    ),
    PARTITION P_ESRODS_EXTR_CMG_TRAN_201105 VALUES LESS THAN (TO_DATE(' 2011-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    COMPRESS
    TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
    PCTFREE 25
    INITRANS 100
    MAXTRANS 255
    STORAGE (
    INITIAL 5M
    NEXT 5M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
    ),
    PARTITION P_ESRODS_EXTR_CMG_TRAN_201106 VALUES LESS THAN (TO_DATE(' 2011-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    COMPRESS
    TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
    PCTFREE 25
    INITRANS 100
    MAXTRANS 255
    STORAGE (
    INITIAL 5M
    NEXT 5M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
    )
    )
    NOCACHE
    NOPARALLEL;
    ALTER TABLE EXTR_CMG_TRANSACTION_HISTORY ADD (
    CONSTRAINT PK_EXTR_CMG_TRANSACTION_HIST PRIMARY KEY (TRANSACTION_ID)
    USING INDEX
    TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_IDX
    PCTFREE 25
    INITRANS 2
    MAXTRANS 255
    STORAGE (
    INITIAL 5M
    NEXT 5M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    )
    );
    Edited by: 855802 on May 1, 2011 6:46 AM

    855802 wrote:
    You cannot bypass redo logging on an update statement; there are only a handful of operations for which you can skip redo logging. The table is created no-logging. Still, an update of this many rows should not be affected by the redo writes. I agree that would be a way to speed it up.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/clauses005.htm#i999782
    NOLOGGING is supported in only a subset of the locations that support LOGGING. Only the following operations support the NOLOGGING mode:
    DML:  
    Direct-path INSERT (serial or parallel) resulting either from an INSERT or a MERGE statement. NOLOGGING is not applicable to any UPDATE operations resulting from the MERGE statement.
    Direct Loader (SQL*Loader)
    DDL:  
    CREATE TABLE ... AS SELECT
    CREATE TABLE ... LOB_storage_clause ... LOB_parameters ... NOCACHE | CACHE READS
    ALTER TABLE ... LOB_storage_clause ... LOB_parameters ... NOCACHE | CACHE READS (to specify logging of newly created LOB columns)
    ALTER TABLE ... modify_LOB_storage_clause ... modify_LOB_parameters ... NOCACHE | CACHE READS (to change logging of existing LOB columns)
    ALTER TABLE ... MOVE
    ALTER TABLE ... (all partition operations that involve data movement)
    ALTER TABLE ... ADD PARTITION (hash partition only)
    ALTER TABLE ... MERGE PARTITIONS
    ALTER TABLE ... SPLIT PARTITION
    ALTER TABLE ... MOVE PARTITION
    ALTER TABLE ... MODIFY PARTITION ... ADD SUBPARTITION
    ALTER TABLE ... MODIFY PARTITION ... COALESCE SUBPARTITION
    CREATE INDEX
    ALTER INDEX ... REBUILD
    ALTER INDEX ... REBUILD [SUB]PARTITION
    ALTER INDEX ... SPLIT PARTITION
    Yes, I was thinking along the lines of my previous post, using CREATE TABLE AS SELECT... But if it's on the application side, then you need to think about another solution...
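    To make the distinction concrete: per the list above, the only DML that can honor NOLOGGING is a direct-path insert; an ordinary UPDATE always generates redo. A hypothetical illustration (the table names are placeholders):

    -- direct-path insert: can skip redo on a NOLOGGING table
    INSERT /*+ APPEND */ INTO target_table SELECT * FROM source_table;

    -- conventional UPDATE: always logged, regardless of the table's NOLOGGING attribute
    UPDATE target_table SET some_col = 'X' WHERE some_col IS NULL;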

  • Partitioned IOT of Object Type - mapping table not allowed for bitmap index

    Hi,
    It looks like a feature available for standard partitioned IOTs is not supported for object-based tables, namely the MAPPING TABLE construct needed to support secondary local bitmap indexes.
    Can you confirm the behaviour is as expected/documented?
    If so, is a fix/enhancement to support mapping tables for object-based partitioned IOTs in the pipeline?
    Results for partition-wise loads using a pipelined table function are very good, and look-ups across tens of millions of rows are excellent.
    Environment = Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    OS = Oracle Enterprise Linux Server release 5.2 (Carthage) 2.6.18 92.el5 (32-bit)
    Here's the potted test-case...
    1) First the non object based Partitioned IOT - data is range-partitioned across the alphabet
    CREATE TABLE IOT_Table (
    textData VARCHAR2(10),
    numberData NUMBER(10,0),
    CONSTRAINT IOT_Table_PK PRIMARY KEY(textData))
    ORGANIZATION INDEX MAPPING TABLE PCTFREE 0 TABLESPACE Firewire
    PARTITION BY RANGE (textData)
    (PARTITION Text_Part_A VALUES LESS THAN ('B') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_B VALUES LESS THAN ('C') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_C VALUES LESS THAN ('D') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_D VALUES LESS THAN ('E') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_E VALUES LESS THAN ('F') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_F VALUES LESS THAN ('G') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_G VALUES LESS THAN ('H') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_H VALUES LESS THAN ('I') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_I VALUES LESS THAN ('J') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_J VALUES LESS THAN ('K') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_K VALUES LESS THAN ('L') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_L VALUES LESS THAN ('M') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_M VALUES LESS THAN ('N') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_N VALUES LESS THAN ('O') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_O VALUES LESS THAN ('P') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_P VALUES LESS THAN ('Q') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_Q VALUES LESS THAN ('R') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_R VALUES LESS THAN ('S') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_S VALUES LESS THAN ('T') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_T VALUES LESS THAN ('U') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_U VALUES LESS THAN ('V') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_V VALUES LESS THAN ('W') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_W VALUES LESS THAN ('X') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_X VALUES LESS THAN ('Y') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_Y VALUES LESS THAN ('Z') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_Z VALUES LESS THAN (MAXVALUE) PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0))
    NOLOGGING PARALLEL -- FLASHBACK ARCHIVE IOT_Flashback_Data
    SQL> table IOT_TABLE created.
    2) Create the local secondary bitmap index utilising the underlying mapping table
    CREATE BITMAP INDEX IOT_Table_BMI1 ON IOT_Table (numberData)
    LOCAL STORAGE (INITIAL 1M PCTINCREASE 0 NEXT 512K) NOLOGGING PARALLEL;
    SQL> bitmap index IOT_TABLE_BMI1 created.
    3) Quick test to confirm all ok
    SQL> INSERT INTO IOT_Table VALUES ('ABC123',100);
    SQL> 1 rows inserted.
    SQL> SELECT * FROM IOT_Table;
    TEXTDATA NUMBERDATA
    ABC123     100
    4) Now create a minimal object type to use as the template for object table
    CREATE TYPE IOT_type AS OBJECT (
    textData VARCHAR2(10 CHAR),
    numberData NUMBER(10,0)
    ) FINAL;
    SQL> TYPE IOT_type compiled
    5) Attempt to create an object-based range partitioned IOT, including MAPPING TABLE clause as per step (1)
    CREATE TABLE IOTObj_Table OF IOT_type (textData PRIMARY KEY)
    OBJECT IDENTIFIER IS PRIMARY KEY ORGANIZATION INDEX
    MAPPING TABLE -- we'd like to use this feature to enable use of Bitmap Indexes...
    PCTFREE 0 TABLESPACE Firewire
    PARTITION BY RANGE (textData)
    (PARTITION Text_Part_A VALUES LESS THAN ('B') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_B VALUES LESS THAN ('C') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_C VALUES LESS THAN ('D') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_D VALUES LESS THAN ('E') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_E VALUES LESS THAN ('F') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_F VALUES LESS THAN ('G') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_G VALUES LESS THAN ('H') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_H VALUES LESS THAN ('I') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_I VALUES LESS THAN ('J') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_J VALUES LESS THAN ('K') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_K VALUES LESS THAN ('L') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_L VALUES LESS THAN ('M') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_M VALUES LESS THAN ('N') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_N VALUES LESS THAN ('O') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_O VALUES LESS THAN ('P') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_P VALUES LESS THAN ('Q') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_Q VALUES LESS THAN ('R') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_R VALUES LESS THAN ('S') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_S VALUES LESS THAN ('T') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_T VALUES LESS THAN ('U') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_U VALUES LESS THAN ('V') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_V VALUES LESS THAN ('W') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_W VALUES LESS THAN ('X') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_X VALUES LESS THAN ('Y') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_Y VALUES LESS THAN ('Z') PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0),
    PARTITION Text_Part_Z VALUES LESS THAN (MAXVALUE) PCTFREE 0 TABLESPACE Firewire Storage (Initial 10M Next 1M PCTIncrease 0))
    NOLOGGING PARALLEL -- FLASHBACK ARCHIVE IOT_Flashback_Data
    This errors out with the following...
    SQL Error: ORA-25182: feature not currently available for index-organized tables
    25182. 00000 - "feature not currently available for index-organized tables"
    *Cause:    An attempt was made to use one or more of the following feature(s) not
    currently supported for index-organized tables:
    CREATE TABLE with LOB/BFILE/VARRAY columns,
    partitioning/PARALLEL/CREATE TABLE AS SELECT options,
    ALTER TABLE with ADD/MODIFY column options, CREATE INDEX
    *Action:   Do not use the disallowed feature(s) in this release.
    6) Re-running the create table statement in step 5 without the MAPPING TABLE clause works fine. Not surprisingly an attempt to create a secondary local bitmap index on this table fails as there's no mapping table, like so...
    CREATE BITMAP INDEX IOTObj_Table_BMI1 ON IOTObj_Table (numberData)
    LOCAL STORAGE (INITIAL 1M PCTINCREASE 0 NEXT 512K) NOLOGGING PARALLEL;
    CREATE BITMAP INDEX IOTObj_Table_BMI1 ON IOTObj_Table (numberData)
    LOCAL STORAGE (INITIAL 1M PCTINCREASE 0 NEXT 512K) NOLOGGING PARALLEL
    Error at Command Line:99 Column:13
    Error report:
    SQL Error: ORA-00903: invalid table name
    00903. 00000 - "invalid table name"
    7) Creating a secondary local b-tree index is fine, like so...
    SQL> CREATE INDEX IOTObj_Table_I1 ON IOTObj_Table (numberData)
    LOCAL STORAGE (INITIAL 1M PCTINCREASE 0 NEXT 512K) NOLOGGING PARALLEL;
    index IOTOBJ_TABLE_I1 created.
    8) A quick test to ensure object table ok...
    SQL> INSERT INTO IOTObj_Table VALUES (IOT_Type('DEF456',500));
    SQL> 1 rows inserted.
    SQL> SELECT * FROM IOTObj_Table;
    TEXTDATA NUMBERDATA
    DEF456     500

    Thanks Dan,
    the intention is to range partition based on the initial character, so A* -> Text_Part_A, B* -> Text_Part_B, and so on.
    Here's an example, using an empty IOTObj_Table as created previously.
    1) Set up & confirm some test data (two 'D's, one 'N', and two 'Z's)
    SQL> INSERT INTO IOTObj_Table VALUES (IOT_Type('DEF456',500));
    SQL> INSERT INTO IOTObj_Table VALUES (IOT_Type('DDD111',510));
    SQL> INSERT INTO IOTObj_Table VALUES (IOT_Type('N3000',515));
    SQL> INSERT INTO IOTObj_Table VALUES (IOT_Type('ZZ1212',520));
    SQL> INSERT INTO IOTObj_Table VALUES (IOT_Type('Z111X',530));
    SQL> COMMIT;
    SQL> SELECT * FROM IOTObj_Table;
    TEXTDATA NUMBERDATA
    DDD111     510
    DEF456     500
    N3000     515
    Z111X     530
    ZZ1212     520
    2) Just to prove our IOT is enforcing the Primary Key based on the TextData attribute, try to insert a duplicate
    SQL> INSERT INTO IOTObj_Table VALUES (IOT_Type('Z111X',530));
    Error starting at line 141 in command:
    INSERT INTO IOTObj_Table VALUES (IOT_Type('Z111X',530))
    Error report:
    SQL Error: ORA-00001: unique constraint (OCDataSystems.SYS_IOT_TOP_84235) violated
    00001. 00000 - "unique constraint (%s.%s) violated"
    *Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
    For Trusted Oracle configured in DBMS MAC mode, you may see
    this message if a duplicate entry exists at a different level.
    *Action:   Either remove the unique restriction or do not insert the key.
    3) Now confirm that our data has been slotted into the range-based partition we expect using the PARTITION clause of SELECT...
    - The two 'D's...
    SQL> SELECT * FROM IOTObj_Table PARTITION (Text_Part_D);
    TEXTDATA NUMBERDATA
    DDD111     510
    DEF456     500
    - The single 'N'...
    SQL> SELECT * FROM IOTObj_Table PARTITION (Text_Part_N);
    TEXTDATA NUMBERDATA
    N3000     515
    - The two 'Z's...
    SQL> SELECT * FROM IOTObj_Table PARTITION (Text_Part_Z);
    TEXTDATA NUMBERDATA
    Z111X     530
    ZZ1212     520
    4) And to wrap up confirm an empty partition
    SELECT * FROM IOTObj_Table PARTITION (Text_Part_W);

  • Pacproxy (or something that vaguely resembles an apt-proxy clone)

    this may not be the right forum for this, but here is a little python app to proxy packages from a mirror, and (eventually) to automatically create repos from available packages in the local ABS tree.  i didnt like the suggested solution of network mounting /var/cache/pacman/pkg/, and i wanted my ABS built packages to autoupdate without running a repo-add manually all the time... and i detest cron jobs.
    as stated, this started as a project to automatically create repos from any available binary packages existing within the ABS tree; i havent quite finished that yet.  i am using the proxy part for 4 arch machines in my home and it seems to be doing pretty good.  when the ABS stuff is done it will behave like this:
    /var/abs/<repo_name>/..../..../{pkg/,src/}
    where any packages in directory <repo_name> will be advertised as being a part of a repo with the same name (the <repo_name>.db.tar.gz file will be dynamically created and cached, this is the part im not done with).  it wont matter how deep the pkg file is, and the architecture will be automatically accounted for by reading the .PKGINFO file.
    right now though, proxying from ONE mirror (will probably add support for a mirror list like in pacman.d, but im not sure how this is handled exactly, any info on that would be great) seems to work pretty good, and it will proxy both architectures.  as is, it will store packages and a small "cache" file in .pacproxy/<repo_name>/<arch>/.  the cache file has the same name as the package, and simply holds the Etag or Last-Modified header from when the package was pulled from the mirror.  every time a file is requested, a HEAD request is sent to the mirror using that information, and if a 304 (not-modified) is returned, the cached copy is used, else the new copy is pulled and the cache file updated.  looks something like this:
    [cr@extOFme-d0 ~]$ tree .pacproxy
    .pacproxy
    |-- community
    | |-- i686
    | | |-- community.db.tar.gz
    | | `-- community.db.tar.gz.cache
    | `-- x86_64
    | |-- community.db.tar.gz
    | `-- community.db.tar.gz.cache
    |-- core
    | |-- i686
    | | |-- core.db.tar.gz
    | | |-- core.db.tar.gz.cache
    | | |-- coreutils-8.2-1-i686.pkg.tar.gz
    | | |-- coreutils-8.2-1-i686.pkg.tar.gz.cache
    | | |-- filesystem-2009.11-1-any.pkg.tar.gz
    | | |-- filesystem-2009.11-1-any.pkg.tar.gz.cache
    | | |-- glib2-2.22.3-1-i686.pkg.tar.gz
    | | `-- glib2-2.22.3-1-i686.pkg.tar.gz.cache
    | `-- x86_64
    | |-- core.db.tar.gz
    | `-- core.db.tar.gz.cache
    `-- extra
    |-- i686
    | |-- boost-1.41.0-2-i686.pkg.tar.gz
    | |-- boost-1.41.0-2-i686.pkg.tar.gz.cache
    | |-- extra.db.tar.gz
    | |-- extra.db.tar.gz.cache
    | |-- xdg-utils-1.0.2.20091216-1-any.pkg.tar.gz
    | |-- xdg-utils-1.0.2.20091216-1-any.pkg.tar.gz.cache
    | |-- xf86-input-synaptics-1.2.1-1-i686.pkg.tar.gz
    | |-- xf86-input-synaptics-1.2.1-1-i686.pkg.tar.gz.cache
    | |-- xulrunner-1.9.1.6-1-i686.pkg.tar.gz
    | `-- xulrunner-1.9.1.6-1-i686.pkg.tar.gz.cache
    `-- x86_64
    |-- extra.db.tar.gz
    `-- extra.db.tar.gz.cache
    i am still relatively new to the python scene, and i know there are several optimizations and probably a lot of refactoring that will happen before im satisfied with it, but i think it is useful enough at this point to release to everyone here.  any ideas are very welcome, and once i finally get my server back to the datacenter, ill host this (in git) on extof.me along with some other goodies TBA at a later date :).  see POSSIBLE CAVEATS and DEVELOPMENT for ideas as to where im going and some issues that are definitely present right now.
    DEPENDENCIES
    $ pacman -S cherrypy
    HOW TO USE
    ...point pacman.conf to it (port 8080 by default)...
    Server = http://localhost:8080/archlinux/$repo/os/x86_64
    ...edit pacproxy.py and change "mirrors" to an appropriate one for you (use $arch variable!)...
    mirrors = {'mirrors.gigenet.com': '/archlinux/$repo/os/$arch'}
    $ python pacproxy.py
    POSSIBLE CAVEATS
    1) multiple requests from multiple machines at the same time will probably cause some problems right now, as the cache's state will be inconsistent.  i *think* concurrent tools like powerpill will still work correctly from the same machine, since its not pulling the same packages twice, and cherrypy will handle the threading.
    2) im pretty sure the caching stuff is working correctly, but im not sure if pacman is realizing that its real copy (specifically the db.tar.gz files) is up to date.  i may need to send some additional headers
    3) if its not obvious, this only proxies http requests, not ftp
    4) there is no cache cleaning in .pacproxy, as files become out of date, they will just stay there and take up space.  not a huge deal, im just not sure how to address this, maybe remove them after X days of not being accessed
    5) there are some security issues with eval'ing the .cache file (its just a dict), should maybe do that differently (see the sketch after this list)
    6) im sure there are many other problems and security flaws, ill list/remove them as they show up/are fixed
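    A minimal sketch for caveat 5, swapping the eval'd repr for JSON (same dict on disk, no code execution; the helper names are hypothetical):

    import json

    def write_cache(local_file, cache_dict):
        # stores e.g. {'If-None-Match': ...} or {'If-Modified-Since': ...}
        with open(local_file + '.cache', 'w') as fd:
            json.dump(cache_dict, fd)

    def read_cache(local_file):
        with open(local_file + '.cache') as fd:
            return json.load(fd)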
    DEVELOPMENT
    1) anyone looking to mess with this (please do!), you can use cherrypy.log(msg) to send stuff to the log file (stdout unless you've changed it)
    2) fix some concurrency issues by stalling one thread's download of a pkg until another thread has finished writing the pkg to the cache (hopefully lockfiles+timeouts will take care of this)
    3) finish the ABS autobuilder, and maybe look into pulling in packages from another machines ABS tree using ssh/paramiko or similar (that would be cool)
    4) make the code cleaner and avoid some of the duplication, move to multiple files (cherrypy supports automatically monitoring modules for changes then reloading them) otherwise changes to the main file cause a reload of the entire server which could interrupt downloads.  plus right now its pretty much one huge function
    5) python/cherrypy isnt the most efficient way to deliver a large file, maybe there is a way to use python for the logic and a better server to actually read out the file to the client?
    6) probably dont need to ping the server for updates to pkg files... the name of the file *should* change when the file's updated.  this was mainly for ABS derived packages
    7) shower me with ideas!
    and now some code!
    PACPROXY.PY
    import os, fnmatch, httplib, cherrypy

    # we use this to know what to skip in local_dbs
    abs_standard = ['core','extra','community','community-testing']
    # what dbs should we proxy, and which are locally derived from custom ABS directories
    # /var/abs/[db_name]/ are local dbs
    proxy_dbs = abs_standard
    local_dbs = [p for p in os.listdir('/var/abs') if os.path.isdir('/var/abs/' + p) and p not in abs_standard]
    mirrors = {'mirrors.gigenet.com': '/archlinux/$repo/os/$arch'}
    valid_arch = ['i686', 'x86_64', 'any']
    # we'll put stuff here
    cache_root = os.getenv('HOME') + '/.pacproxy'

    def locate_pkgs(pattern):
        top_exclude_dirs = ['core','extra','community','community-testing']
        rel_exclude_dirs = ['pkg','src']
        for path, dirs, files in os.walk('/var/abs'):
            # no reason to look thru folders provided by ABS
            if path=='/var/abs':
                for d in [dir for dir in dirs if dir in top_exclude_dirs]: dirs.remove(d)
            # or folders created by makepkg
            for d in [dir for dir in dirs if dir in rel_exclude_dirs]: dirs.remove(d)
            for filename in fnmatch.filter(files, pattern):
                yield os.path.join(path, filename)

    def gen_proxy(remote_fd, local_fd=None):
        def read_chunks(fd, chunk=1024):
            while True:
                bytes = fd.read(chunk)
                if not bytes: break
                yield bytes
        for bytes in read_chunks(remote_fd):
            if local_fd is not None:
                local_fd.write(bytes)
            yield bytes

    def serve_repository(repo, future, arch, target):
        # couple sanity checks
        if arch not in valid_arch or repo not in proxy_dbs + local_dbs:
            raise cherrypy.HTTPError(404)
        is_db = fnmatch.fnmatch(target, repo + '.db.tar.gz')
        is_pkg = fnmatch.fnmatch(target, '*.pkg.tar.gz')
        is_proxy = repo in proxy_dbs
        is_local = repo in local_dbs
        if not any((is_db, is_pkg)) or not any((is_proxy, is_local)):
            raise cherrypy.HTTPError(404)
        active_mirror = mirrors.iterkeys().next()
        remote_target = '/'.join([mirrors[active_mirror].replace('$repo', repo).replace('$arch', arch), target])
        remote_file = 'http://' + '/'.join([active_mirror, remote_target])
        local_file = '/'.join([cache_root, repo, arch, target])
        cache_dir = os.path.dirname(local_file)
        if not os.path.exists(cache_dir): os.makedirs(cache_dir)
        # find out if there is a cached copy, and if its still good
        if is_proxy:
            if os.path.exists(local_file) and os.path.exists(local_file + '.cache'):
                cache = eval(open(local_file + '.cache').read())
                req = httplib.HTTPConnection(active_mirror)
                req.request('HEAD', remote_target, headers=cache)
                res = req.getresponse()
                if res.status==304:
                    remote_fd = open(local_file, 'rb')
                    local_fd = None
                elif res.status==200:
                    map(os.unlink, [local_file, local_file + '.cache'])
                    etag = res.getheader('etag')
                    last_mod = res.getheader('last-modified')
                    cache_dict = {}
                    if etag is not None:
                        # try etag first
                        cache_dict['If-None-Match'] = etag
                    elif last_mod is not None:
                        cache_dict['If-Modified-Since'] = last_mod
                    if len(cache_dict)>0:
                        cache_fd = open(local_file + '.cache', 'wb')
                        cache_fd.write(repr(cache_dict))
                        cache_fd.close()
                    req2 = httplib.HTTPConnection(active_mirror)
                    req2.request('GET', remote_target)
                    remote_fd = req2.getresponse()
                    local_fd = open(local_file, 'wb')
                else:
                    raise cherrypy.HTTPError(res.status)
            else:
                if os.path.exists(local_file): os.unlink(local_file)
                if os.path.exists(local_file + '.cache'): os.unlink(local_file + '.cache')
                req = httplib.HTTPConnection(active_mirror)
                req.request('GET', remote_target)
                remote_fd = req.getresponse()
                if remote_fd.status!=200:
                    raise cherrypy.HTTPError(remote_fd.status)
                local_fd = open(local_file, 'wb')
                etag = remote_fd.getheader('etag')
                last_mod = remote_fd.getheader('last-modified')
                cache_dict = {}
                if etag is not None:
                    # try etag first
                    cache_dict['If-None-Match'] = etag
                elif last_mod is not None:
                    cache_dict['If-Modified-Since'] = last_mod
                if len(cache_dict)>0:
                    cache_fd = open(local_file + '.cache', 'wb')
                    cache_fd.write(repr(cache_dict))
                    cache_fd.close()
        cherrypy.response.headers['Content-Type'] = 'application/octet-stream'
        if repo in proxy_dbs:
            return gen_proxy(remote_fd, local_fd)
        if repo in local_dbs:
            pass
        # nothing seems valid? throw a 404
        raise cherrypy.HTTPError(404)
    serve_repository.exposed = True

    conf = {'server.socket_host': '0.0.0.0',
            'server.socket_port': 8080,
            'request.show_tracebacks': False}
    cherrypy.config.update(conf)
    cherrypy.quickstart(serve_repository,'/archlinux')
    Last edited by extofme (2009-12-20 02:57:41)

    well, i didnt get it past downloading the first two packages and then hanging (with pacproxy looping on the glibc package), even on a subsequent restart (so that pacproxy would already have those cached), but i did work out an aif profile for this
    # aif -p automatic -c pacinst.aif
    SOURCE=net
    SYNC_URL=http://192.168.0.35:8080/archlinux/core/os/i686
    HARDWARECLOCK=localtime
    TIMEZONE=Europe/Berlin
    # Do you want to have additional pacman repositories or packages available at runtime (during installation)?
    # RUNTIME_REPOSITORIES = array like this ('name1' 'location of repo 1' ['name2' 'location of repo2',..])
    RUNTIME_REPOSITORIES=
    # space separated list
    RUNTIME_PACKAGES=
    # packages to install
    TARGET_GROUPS=base # all packages in this group will be installed (defaults to base if no group and no packages are specified)
    TARGET_PACKAGES_EXCLUDE= # Exclude these packages if they are member of one of the groups in TARGET_GROUPS. example: 'nano reiserfsprogs' (they are in base)
    TARGET_PACKAGES= # you can also specify separate packages to install (this is empty by default)
    # you can optionally also override some functions...
    #worker_intro () {
    #bug ? following gives: inform command not found
    #inform "Automatic procedure running the generic-install-on-sda example config. THIS WILL ERASE AND OVERWRITE YOUR /DEV/SDA. IF YOU DO NOT WANT THIS PRESS CTRL+C WITHIN 10 SECONDS"
    #sleep 10
    #}
    worker_configure_system () {
    prefill_configs
    sed -i 's/^HOSTNAME="myhost"/HOSTNAME="arch-generic-install"/' $var_TARGET_DIR/etc/rc.conf
    }
    # These variables are mandatory
    GRUB_DEVICE=/dev/sda
    PARTITIONS='/dev/sda *:ext3'
    BLOCKDATA='/dev/sda1 raw no_label ext3;yes;/;target;no_opts;no_label;no_params'
