Publish Date Items disappeared

This happens on Portal 9.0.4.99. I tried to publish a text item at 10:00 AM and changed the Publish Date to 3:00 PM; after hitting Submit, the item disappeared, even in admin (Edit) mode. The item then showed up at 3:00 PM. This only happens on one Page Group, not on another one.

hi geofferey,
there is a page group property - on the Items tab - where you can set whether unpublished items are displayed in edit mode.
regards,
christian

Similar Messages

  • How To View An Item When Publish Date Is Set Greater Than Current Date

    Using Oracle Portal Version: 10.1.2.0.2 (Build: 139).
    The customer (content manager) uploads a document into the portal on 28-JUL-2006 07:08 AM.
    The publish date is changed to 1-AUG-2006 00:01 AM.
    There is a portal user group that needs to review the document prior to the publish date.
    I created an Oracle procedure to display the links to the unpublished items.
    Unfortunately, the links generate an "item cannot be found" error.
    When the publish date is met, the links function properly.
    How can I provide a way for this user group to view the unpublished document without granting them the manage content privilege on the page?
    Thanks

    For anyone else struggling with this... In my case this was not working due to a bug in XI R2, XI 3.0, and XI 3.1: QueryService modifyDataProvider (tracking ADAPT01113234). It has been fixed with XI Release 2 FixPack 4.5, and I've tested that it works for me. No such fix is available for XI 3.0 or XI 3.1 as of this post, but I was told that it would get a fix in the next SP (fingers crossed).

  • Cursor disappears when right-justifying numeric data items

    Folks,
    Why does my cursor disappear when I right-justify numeric data items?
    It does appear again when I actually start typing a dollar amount. Note that my item is a Number value in a database block and my format mask is $9,999,999.99.
    Thank you,
    Bob

    Sorry, I copied in the incorrect link. Try this post: Field text Disappears. And I found the other bugs I was thinking of: 379621 and 398900.

  • Contacts and calendar items disappear when exchange server is off

    Hi All
    I've been having this problem for a few weeks now and it's most frustrating. My iPhone is set up to sync over the air to our corporate Exchange server. It seems that when the server is down for repair/patching, ALL my contacts and calendar items disappear from my iPhone. Oddly, this doesn't seem to happen when the phone is out of signal.
    I know that I can sync directly with Outlook; however, this would mean the phone is only updated when synced with my laptop at work. If I sync both over the air AND with Outlook, I get duplicates of everything in my calendar.
    I'm now restoring my iPhone to factory defaults to get rid of the duplicates and go back to over-the-air sync only.
    Does anyone have any ideas on how I can get calendar and contacts to work "offline" when the Exchange server is down?
    The main PC is running Windows 7 and Exchange is 2007. This problem is only recent, I think within the last month or two. Nothing has changed other than the last few firmware updates from Apple.

    hi, cheers for the info. That would make sense; however, when I'm out of signal and not connected to wifi, how come the data doesn't also disappear, as that too would render the server "unavailable"?
    Is there a way to make the iPhone keep a copy of the data, much the same as "offline mode" in Outlook, where you can still browse your email, calendar and contacts when the server is offline, and it picks up where it left off when the server is available again?
    I appreciate this may not be a "problem" as such, but it is frustrating, as all my contacts and calendar live in Outlook and I can't function without them.
    Message was edited by: spongmonkey

  • File in File Browse item disappears if validation fails on any item ....

    Greetings:
    I'm using APEX 4.0. I have a region with 7 data elements, one of them being a File Browse page item. The BLOB file loads in the WWV_FLOW_FILES table first, then in the "After Submit" page processing, I move the BLOB into my own custom table. This works great in normal processing.
    However, if any of the other data elements in this region fails validation, the page renders with the validation messages, but the file in the File Browse page item disappears. Therefore, the user would have to re-select the file to upload before they resubmit and process the page again.
    How can I avoid this? Why does the path and file name disappear when page validation fails?
    Thanks,
    Stan

    bondurs wrote:
    Greetings:
    I'm using APEX 4.0. I have a region with 7 data elements, one of them being a File Browse page item. The BLOB file loads in the WWV_FLOW_FILES table first, then in the "After Submit" page processing, I move the BLOB into my own custom table. This works great in normal processing.
    However, if any of the other data elements in this region fails validation, the page renders with the validation messages, but the file in the File Browse page item disappears. Therefore, the user would have to re-select the file to upload before they resubmit and process the page again.
    How can I avoid this? Why does the path and file name disappear when page validation fails?
    It is a required security feature. Per the HTML specification, APEX will not render a value in a file browse item on page show. This protects the user from nefarious persons changing the file item value during a spurious "failed" validation (hoping the user is distracted correcting the "failed" item and does not notice) in order to capture a file the user does not intend to submit (e.g. /etc/passwd).

  • IMac monitor - desktop items disappear/reappear - applications pause

    For about a week now, every 2 minutes all of my desktop icons will disappear and then reappear. If I have a folder open, such as the Pictures folder, it will close during this process.
    If I am using Firefox or Safari or Photoshop, when the desktop items disappear, so does my mouse's ability to act on the application. I have to click on the application again for the mouse to work in it.
    Today I was attempting to burn files to a DVD, but it failed because of this "refreshing", or whatever you call this symptom. Last night my computer did appear to do its weekly backup (it said it was successful), but I wanted to copy files onto a DVD just in case.
    Can someone give me some direction on how to troubleshoot this? In beginner terms? I'd really appreciate it. Thank you.
    P.S. The only new things that happened recently that I can think of are that I did a software upgrade (I think it was QuickTime...) and downloaded a WordPress theme.

    Sometimes, starting up in SafeBoot (hold the Shift key after the start tone; the login window and a semi-normal desktop Finder appear, but in a reduced-function mode) and running 'repair disk permissions' in Disk Utility on the computer's volume may help. After that, or after any other minor repairs done in SafeBoot, give the computer a normal restart so the rest of the OS is up and running.
    If that does not help, there are other things one may try. The matter involves some troubleshooting: sometimes a cause-and-effect approach is taken; on other occasions a user tries a long laundry list of things and hopes the issue goes away, in which case you may never learn the specific cause.
    Some items causing a problem can be dealt with while the computer is started up in SafeBoot, since extensions and other parts of such software are turned off; Stuffit, for example, can be removed while it is disabled in SafeBoot. Be sure to run 'repair disk permissions' before and after software updates, since for some reason this appears to help the system.
    Sometimes just basic system maintenance, such as repairing disk permissions or running the periodic cron-like routines (see the AppleJack download, or OnyX in automation), can resolve issues that would otherwise require knowing more about how the system works. These tools are free to download and try, and can save a bit of trouble; some of the benefit is preventive, in that the routines can be run on a schedule to keep the system well.
    The bootable installer disc can also be helpful, since there are optional tools in the Disk Utility on that disc's boot volume, available ahead of actually running the Installer itself. A menu bar appears in the first full Installer window, and from its drop-down menus you can open Disk Utility, which offers a variety of tools that can attempt a repair of the HDD (or totally erase it).
    In some cases, perhaps extreme ones, an 'archive & install', followed by updating the resulting new system to the latest version of OS X (including the security update, Java, QuickTime, Safari, and other parts), is one way to resolve a system software issue. It is among the last options, but not the least. Further study is recommended before going the A&I-plus-update route, depending on what else is going on in the system as it exists now. If you do perform an archive & install, be sure the checkbox to save user account info is marked, so the new system will retain it...
    In any event...
    Good Luck & happy computing!

  • How to configure CSWP on Category page to show the Published Catalog-item page on Publishing site in a Cross Site Publishing scenario?

    I have created a Cross Site Publishing environment in SharePoint Online. After connecting to my catalog, two pages were automatically created. But on the "Category" page, if I click on an item it brings me to the original path/item located on the Authoring site. How do I configure the Content Search Web Part on the Category page to show the published catalog-item page on the Publishing site?
    Can we do this by changing the property mappings?

    Hi,
    According to my understanding, you want users to be redirected to pages in the current site instead of to the source page of the search results in a Content Search Web Part.
    By default, the hyperlinks of the search results in a Content Search Web Part point to the source page the data comes from; when the hyperlink of a result is clicked, the user is redirected to the corresponding source page.
    If the data comes from other sites, what page do you want to display when a user clicks a search result in the Content Search Web Part?
    Property Mappings can help control the content of each part of a display template; however, there seems to be no property in the search results that can redirect to pages of the current site, so it might not be possible to meet your requirement.
    More information about customizing the Content Search Web Part:
    https://www.martinhatch.com/2013/02/customising-cbswp-part1.html
    Best regards,
    Patrick
    Patrick Liang
    TechNet Community Support

  • Publishing Content Items through API

    Hello,
    I am currently using EDK 5.3 for publishing content items. I have been successful in editing an existing content item using the APIs. However, the APIs don't seem to have a way to schedule content items to be published at a future date.
    I did some reverse engineering and found that this information is stored in the pcsscheduledcontent and pcsscheduleitems tables. I was able to figure out how to change the dates, as these are stored in the pcsscheduleitems table; however, I can't find where the values for the time are stored.
    Say I want to publish a content item on 08/28/2008 at 4:00 pm using the APIs: I am able to get to the date from the above 2 tables but really can't find where the time is stored. I have verified from the user interface that the time is also retained based on the options we pick.
    I have also seen that the Loadid keeps increasing for every change I make.
    Can someone please help me / guide me in the right direction?
    Thanks.

    Look at the pcsscheduledcontent table: schedule something to be published in the future and then do a select on that table with an ORDER BY clause of:
    ORDER BY ITEMBEGINDATE DESC
    The itembegindate column contains the date AND time of the publishing schedule; you may just not be converting the value correctly.
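    For example, if you want to eyeball the stored values directly, a quick JDBC query is one way to check (a minimal sketch; the connection details are placeholders, and only the table and column named in this thread are assumed):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.Timestamp;
    public class ScheduleCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - point this at the Publisher database.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/publisher", "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT itembegindate FROM pcsscheduledcontent"
                   + " ORDER BY itembegindate DESC")) {
                while (rs.next()) {
                    // getTimestamp (rather than getDate) preserves the time-of-day part.
                    Timestamp begin = rs.getTimestamp("itembegindate");
                    System.out.println(begin);
                }
            }
        }
    }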
    hth,
    Robert

  • Time Based publishing dates lost through copying content

    We are using a SAP NetWeaver Portal, www.dkv-euroservice.com, and a standard KM. If we use the copy function in the document explorer on a content item that is in a KM directory, we notice that all "time based publishing" dates belonging to this item are lost.
    These properties are empty in the original document that we copied and also in the newly copied item?!
    The problem with the copy UI command occurs both when selecting one content item and when selecting several items in the same copy execution.
    Please inform us about a fix for this problem. When we mentioned this behaviour to SAP at the DSAG fair in Leipzig, Germany, we got the answer that this is a software bug.
    Regards
    Shah Saad Azfar

    Copying content might not preserve the time-based publishing parameters; however, you can try the "Move" command instead.
    Move will carry all related parameters along with the file...
    Regards,
    Tushar Dave

  • Publish Data VI error code -34117

    On a new computer, I upgraded a VI from LabVIEW 7.1 to 8.6 which includes the Publish Data VI.
    I ran the VI with no errors. Since I want to read this point from the host computer using DataSocket, I went to MAX to add the data point that is published, but MAX does not find any data points.
    So I tried to test the Publish Data VI in a simple test VI which includes only that VI, but I am getting error code -34117.
    I am using a FieldPoint cFP-2220 and LabVIEW 8.6.
    Any suggestions as to why the Publish Data VI is not working, and why I am getting error -34117 even though the VI itself is not returning any errors?
    Is there any difference in how the Publish Data VI and DataSocket work in LabVIEW 8.6 vs. LabVIEW 7.1?
    Thanks.

    Hi j_sd,
    Are you following the instructions laid out in the Publish Data VI Help article?
    Publish Data Details
    You must run any VI that contains the Publish Data VI at least once before adding the data items from the VI to Measurement & Automation Explorer (MAX).
    To add a data item from the Publish Data VI to MAX, complete the following steps:
    1) Make sure LabVIEW Real-Time (RT) is targeted to a FieldPoint RT controller. Refer to the LabVIEW Real-Time User Manual for more information about targeting LabVIEW RT to a controller.
    2) Place a Publish Data VI on the block diagram.
    3) Right-click the Publish Data VI, select Select Type, and choose the data type for the new item you want to create.
    4) Double-click the Publish Data VI.
    5) On the Publish Data VI front panel, enter values in the block name and item name fields.
    6) Select an action from the action menu.
    7) Run the Publish Data VI.
    8) In MAX, right-click the FieldPoint RT controller under My System»Data Neighborhood and select Create New Item from the shortcut menu.
    9) In the Create New dialog box, select LabVIEW Item. Click the Continue button.
    10) Select the item from the Address of this item field.
    11) In the Create New LabVIEW Item dialog box, enter a name for the LabVIEW item in the Name field. Click the OK button.
    12) Save the .iak configuration file.
    The data published by this VI can be read or written to over a network by subscribing to the block and item names, using the DataSocket Read or DataSocket Write VIs. The value of an item may also be read or changed on the local machine where it was created by using this VI and passing in the same block and item name.
    Aaron P
    National Instruments
    Applications Engineer
    http://www.ni.com/support

  • Item disappeared from cache...

    I inserted 4 items in my cache. I have a JTable that displays them as they show up. I saw all four come in. That app's console logging confirmed it:
    2009-10-08 14:51:50.299/78.263 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT      47V31209        1       SIM     2009281 0
    2009-10-08 14:54:52.292/260.256 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        3       SIM     2009281 0
    2009-10-08 14:55:11.208/279.172 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        5       SIM     2009281 0
    2009-10-08 14:55:36.024/303.988 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        7       SIM     2009281 0
    I tried to list the items using Coherence's provided console app:
    Map (orders): cache executions
    <distributed-scheme>
      <!--
      To use POF serialization for this partitioned service,
      uncomment the following section
      <serializer>
      <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      </serializer>
      -->
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    Map (executions): list
    MSFT    47V31209        5       SIM     2009281 0 = BUY 1 MSFT @ 1.0 Thu Oct 08 14:55:11 CDT 2009
    MSFT    47V31209        7       SIM     2009281 0 = BUY 1 MSFT @ 1.0 Thu Oct 08 14:55:35 CDT 2009
    MSFT    47V31209        3       SIM     2009281 0 = BUY 1 MSFT @ 1.0 Thu Oct 08 14:54:52 CDT 2009
    Iterator returned 3 items
    Map (executions): size
    4
    It says size is 4 but "Iterator returned 3 items". What??
    I refreshed my swing app's CQC and it confirmed that there are only 3 now:
    2009-10-08 15:01:25.147/653.111 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        5       SIM     2009281 0
    2009-10-08 15:01:25.147/653.111 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        3       SIM     2009281 0
    2009-10-08 15:01:25.147/653.111 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        7       SIM     2009281 0
    Apparently an item disappeared from the cache somehow, right? My app with the CQC updating the JTable didn't get any entryDeleted events.
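    For reference, a CQC with a MapListener is wired up roughly like this (a minimal sketch; the filter choice and the listener bodies are assumptions, not the actual viewer code):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.filter.AlwaysFilter;
    public class ExecutionViewerWiring {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("executions");
            // The CQC keeps a local, continuously synchronized view of the cache.
            ContinuousQueryCache cqc = new ContinuousQueryCache(cache, new AlwaysFilter());
            cqc.addMapListener(new MapListener() {
                public void entryInserted(MapEvent e) { System.out.println("insert " + e.getKey()); }
                public void entryUpdated(MapEvent e)  { System.out.println("update " + e.getKey()); }
                // This is the callback that never fired when the item vanished.
                public void entryDeleted(MapEvent e)  { System.out.println("delete " + e.getKey()); }
            });
        }
    }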
    Thanks,
    Andrew

    All I do to reproduce it is start Coherence, register a trigger with that offending line of code in it, and add an Execution to the executions cache. The Execution shows up, but is immediately gone as soon as that line of code in the trigger runs. No entryDeleted event comes over the CQC. I haven't tried doing a get on the missing object's key yet.
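    For reference, the trigger factory named in the cache config below (oms.grid.ExecutionMapTrigger with createTriggerListener) has roughly this shape; the process body here is an assumption, not the actual offending code:
    import com.tangosol.util.MapTrigger;
    import com.tangosol.util.MapTriggerListener;
    public class ExecutionMapTrigger implements MapTrigger {
        // Coherence calls this factory (<class-factory-name> plus <method-name>)
        // to build the listener registered on the executions cache.
        public static MapTriggerListener createTriggerListener() {
            return new MapTriggerListener(new ExecutionMapTrigger());
        }
        public void process(MapTrigger.Entry entry) {
            // Runs on the storage member before the change is committed.
            // A trigger that throws here, or that calls entry.remove(true)
            // (a synthetic removal), can make a just-inserted entry vanish
            // without the client ever seeing an entryDeleted event.
        }
    }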
    This starts the application:
    setlocal
    set JAVA_HOME=c:\jdk
    set memory=256m
    set java_opts=%java_opts% -Xms%memory%
    set java_opts=%java_opts% -Xmx%memory%
    set java_opts=%java_opts% -server
    set java_opts=%java_opts% -Dtangosol.coherence.distributed.localstorage=false
    set java_opts=%java_opts% -Dtangosol.coherence.member=%username%
    set PATH=%JAVA_HOME%\bin
    set CP=.
    set CP=%CP%;%JAVA_HOME%\lib
    set CP=%CP%;S:\java\javaclasses\log4j-1.2.8.jar
    set CP=%CP%;S:\java\oms2\lib\oms2.jar
    set CP=%CP%;S:\java\mju\classes
    set CP=%CP%;s:\java\javaclasses\mysql-connector-java.jar
    set CP=%CP%;s:\java\coup\lib\coup.jar
    set CP=%CP%;s:\java\execution_viewer\lib\execution_viewer.jar
    set CP=%CP%;S:\java\stats\classes\
    set CP=%CP%;S:\java\javaclasses\quoteclient.jar
    set CP=%CP%;S:\java\javaclasses\commons-collections-3.2.1.jar
    set CP=%CP%;S:\java\javaclasses\commons-configuration-1.6.jar
    set CP=%CP%;S:\java\javaclasses\commons-lang-2.4.jar
    set CP=%CP%;S:\java\javaclasses\commons-logging-1.1.1.jar
    rem --- BORLAND LAYOUTS ---
    set CP=%CP%;s:\java\javaclasses\jbcl.jar
    rem --- BORLAND LAYOUTS ---
    rem --- COHERENCE ----
    set CP=%CP%;C:\coherence\lib\tangosol.jar
    set CP=%CP%;C:\coherence\lib\coherence.jar
    set CP=%CP%;C:\coherence\lib\coherence-messagingpattern-2.2.0.jar
    set CP=%CP%;C:\coherence\lib\coherence-work.jar
    rem --- COHERENCE ----
    IF EXIST %JAVA_HOME%\bin\java_for_execution_viewer.exe GOTO OK
    copy %JAVA_HOME%\bin\java.exe %JAVA_HOME%\bin\java_for_execution_viewer.exe
    :OK
    %JAVA_HOME%\bin\java_for_execution_viewer.exe -classpath %CP% %java_opts% execution_viewer.ExecutionViewer
    endlocal
    This starts the coherence node:
    set java_home=c:\jdk1.6.0_14_64bit
    :config
    @rem specify the Coherence installation directory
    set coherence_home=%~dp0\..
    @rem specify the JVM heap size
    set memory=512m
    :start
    if not exist "%coherence_home%\lib\coherence.jar" goto instructions
    :launch
    set java_opts=%java_opts% -Xms%memory%
    set java_opts=%java_opts% -Xmx%memory%
    set java_opts=%java_opts% -Dtangosol.coherence.management=all
    set java_opts=%java_opts% -Dtangosol.coherence.management.remote=true
    set java_opts=%java_opts% -Dtangosol.coherence.localhost=%my_ip%
    set java_opts=%java_opts% -Djava.net.preferIPv4Stack=true
    set java_opts=%java_opts% -Dcom.sun.management.jmxremote
    set java_opts=%java_opts% -Dcom.sun.management.jmxremote.authenticate=false
    set java_opts=%java_opts% -Dcom.sun.management.jmxremote.ssl=false
    set java_opts=%java_opts% -Dtangosol.coherence.cacheconfig=c:/coherence/cache-config-dev.xml
    set java_opts=%java_opts% -Dtangosol.pof.enabled=false
    set CP=%CP%;"%coherence_home%\lib\coherence.jar"
    set CP=%CP%;S:\java\javaclasses\log4j-1.2.8.jar
    set CP=%CP%;s:\java\coup\lib\coup.jar
    set CP=%CP%;S:\java\oms2\lib\oms2.jar
    set CP=%CP%;S:\java\stats\classes\
    echo %java_opts%
    REM "%java_exec%" -Dcom.sun.management.jmxremote.port=9991 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -server -showversion "%java_opts%" -cp "%coherence_home%\lib\coherence.jar" com.tangosol.net.DefaultCacheServer %1
    copy %java_home%\bin\java.exe %java_home%\bin\java_for_coherence.exe
    echo ****
    %java_home%/bin/java_for_coherence.exe -server -showversion %java_opts% -cp %CP% com.tangosol.net.DefaultCacheServer %1
    echo ****
    goto exit
    :instructions
    echo Usage:
    echo   ^<coherence_home^>\bin\cache-server.cmd
    goto exit
    :exit
    this is c:/coherence/cache-config-dev.xml:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
            <!-- ***********  SCHEME MAPPINGS  ***********  -->
            <caching-scheme-mapping>
                    <cache-mapping>
                            <cache-name>executions</cache-name>
                            <scheme-name>executions-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>stats.*</cache-name>
                            <scheme-name>stats-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>positions</cache-name>
                            <scheme-name>positions-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>oms</cache-name>
                            <scheme-name>default-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>orders</cache-name>
                            <scheme-name>orders-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>coup.*</cache-name>
                            <scheme-name>default-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>legacyExecs</cache-name>
                            <scheme-name>default-scheme</scheme-name>
                    </cache-mapping>
            </caching-scheme-mapping>
            <!-- ******************************** -->
            <caching-schemes>
            <!-- <distributed-scheme> -->
                      <optimistic-scheme>
                            <scheme-name>stats-scheme</scheme-name>
                            <!-- <service-name>ReplicatedCache.Optimistic</service-name> -->
                            <service-name>ReplicatedCache.Optimistic</service-name>
                            <backing-map-scheme>
                                     <local-scheme/>
                                     <!-- <external-scheme>  -->
                                     <!-- <paged-external-scheme>  -->
                                     <!-- <overflow-scheme>  -->
                                     <!-- <class-scheme>  -->
                            </backing-map-scheme>
                            <autostart>true</autostart>
                    </optimistic-scheme>
            <!-- </distributed-scheme> -->
                    <distributed-scheme>
                            <scheme-name>executions-scheme</scheme-name>
                            <service-name>DistributedCache</service-name>
                            <!--
                            <serializer>
                            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                            </serializer>
                            -->
                            <!-- <listener>
                            <class-scheme>
                            <class-factory-name>oms.grid.ExecutionMapTrigger</class-factory-name>
                            <method-name>createTriggerListener</method-name>
                            </class-scheme>
                    </listener> -->
                    <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                    <scheme-name>ExecutionDatabaseScheme</scheme-name>
                                    <internal-cache-scheme>
                                            <local-scheme>
                                                    <!-- Any Memory Scheme Name Could Go Here, Right? -->
                                                    <scheme-name>SomeScheme1</scheme-name>
                                            </local-scheme>
                                    </internal-cache-scheme>
                                    <cachestore-scheme>
                                            <class-scheme>
                                                    <class-name>oms.grid.ExecutionCacheStore</class-name>
                                                    <class-factory-name>oms.grid.ExecutionCacheStore</class-factory-name>
                                                    <init-params>
                                                            <init-param>
                                                                    <param-name>url</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>jdbc:mysql://localhost:6033/oms2?autoReconnect=true</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>username</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>password</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                    </init-params>
                                            </class-scheme>
                                    </cachestore-scheme>
                                    <write-delay>30s</write-delay>
                                    <write-batch-factor>0.5</write-batch-factor>
                            </read-write-backing-map-scheme>
                            <!--
                            <local-scheme>
                            <scheme-ref>example-binary-backing-map</scheme-ref>
                            </local-scheme>
                            -->
                    </backing-map-scheme>
                    <autostart>true</autostart>
            </distributed-scheme>
            <distributed-scheme>
                    <scheme-name>positions-scheme</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <!--
                    <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    </serializer>
                    -->
                    <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                    <scheme-name>PositionDatabaseScheme</scheme-name>
                                    <internal-cache-scheme>
                                            <local-scheme>
                                                    <!-- Any Memory Scheme Name Could Go Here, Right? -->
                                                    <scheme-name>SomeScheme2</scheme-name>
                                            </local-scheme>
                                    </internal-cache-scheme>
                                    <cachestore-scheme>
                                            <class-scheme>
                                                    <class-name>oms.grid.PositionCacheStore</class-name>
                                                    <class-factory-name>oms.grid.PositionCacheStore</class-factory-name>
                                                    <!-- <method-name>PositionCacheStoreFactory</method-name> -->
                                                    <init-params>
                                                            <init-param>
                                                                    <param-name>url</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>jdbc:mysql://localhost:6033/oms2?autoReconnect=true</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>username</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>password</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                    </init-params>
                                            </class-scheme>
                                    </cachestore-scheme>
                                    <write-delay>30s</write-delay>
                                    <write-batch-factor>0.5</write-batch-factor>
                            </read-write-backing-map-scheme>
                            <!--
                            <local-scheme>
                            <scheme-ref>example-binary-backing-map</scheme-ref>
                            </local-scheme>
                            -->
                    </backing-map-scheme>
                    <autostart>true</autostart>
            </distributed-scheme>
            <distributed-scheme>
                    <scheme-name>orders-scheme</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <backing-map-scheme>
                            <local-scheme/>
                    </backing-map-scheme>
                    <listener>
                            <class-scheme>
                                    <class-factory-name>oms.grid.OrderAddTrigger</class-factory-name>
                                    <method-name>createTriggerListener</method-name>
                            </class-scheme>
                    </listener>
                    <!--
                    <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    </serializer>
                    -->
                    <autostart>true</autostart>
            </distributed-scheme>
            <distributed-scheme>
                    <scheme-name>default-scheme</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <!--
                    <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    </serializer>
                    -->
                    <backing-map-scheme>
                            <local-scheme/>
                    </backing-map-scheme>
                    <autostart>true</autostart>
            </distributed-scheme>
    </caching-schemes>
    </cache-config>
    this is coherence-cache-config.xml from the coherence.jar.
    <?xml version="1.0"?>
    <!--
    Note: This XML document is an example Coherence Cache Configuration deployment
    descriptor that should be customized (or replaced) for your particular caching
    requirements. The cache mappings and schemes declared in this descriptor are
    strictly for demonstration purposes and are not required.
    For detailed information on each of the elements that can be used in this
    descriptor please see the Coherence Cache Configuration deployment descriptor
    guide included in the Coherence distribution or the "Cache Configuration
    Elements" page on the Coherence Wiki (http://wiki.tangosol.com).
    -->
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>example-distributed</scheme-name>
          <init-params>
            <init-param>
              <param-name>back-size-limit</param-name>
              <param-value>8MB</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
        <cache-mapping>
          <cache-name>near-*</cache-name>
          <scheme-name>example-near</scheme-name>
          <init-params>
            <init-param>
              <param-name>back-size-limit</param-name>
              <param-value>8MB</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
        <cache-mapping>
          <cache-name>repl-*</cache-name>
          <scheme-name>example-replicated</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>opt-*</cache-name>
          <scheme-name>example-optimistic</scheme-name>
          <init-params>
            <init-param>
              <param-name>back-size-limit</param-name>
              <param-value>5000</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
        <cache-mapping>
          <cache-name>local-*</cache-name>
          <scheme-name>example-object-backing-map</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>example-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!--
        Distributed caching scheme.
        -->
        <distributed-scheme>
          <scheme-name>example-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <!-- To use POF serialization for this partitioned service,
               uncomment the following section -->
          <!--
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          -->
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>example-binary-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!--
        Near caching (two-tier) scheme with size limited local cache
        in the front-tier and a distributed cache in the back-tier.
        -->
        <near-scheme>
          <scheme-name>example-near</scheme-name>
          <front-scheme>
            <local-scheme>
              <eviction-policy>HYBRID</eviction-policy>
              <high-units>100</high-units>
              <expiry-delay>1m</expiry-delay>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <distributed-scheme>
              <scheme-ref>example-distributed</scheme-ref>
            </distributed-scheme>
          </back-scheme>
          <invalidation-strategy>present</invalidation-strategy>
          <autostart>true</autostart>
        </near-scheme>
        <!--
        Replicated caching scheme.
        -->
        <replicated-scheme>
          <scheme-name>example-replicated</scheme-name>
          <service-name>ReplicatedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>unlimited-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </replicated-scheme>
        <!--
        Optimistic caching scheme.
        -->
        <optimistic-scheme>
          <scheme-name>example-optimistic</scheme-name>
          <service-name>OptimisticCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>example-object-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </optimistic-scheme>
        <!--
         A scheme used by backing maps that may store data in object format and
         employ size limitation and/or expiry eviction policies.
        -->
        <local-scheme>
          <scheme-name>example-object-backing-map</scheme-name>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>{back-size-limit 0}</high-units>
    <!--      <expiry-delay>{back-expiry 1h}</expiry-delay> -->
          <flush-delay>1m</flush-delay>
          <cachestore-scheme></cachestore-scheme>
        </local-scheme>
        <!--
         A scheme used by backing maps that store data in internal (binary) format
         and employ size limitation and/or expiry eviction policies.
        -->
        <local-scheme>
          <scheme-name>example-binary-backing-map</scheme-name>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>{back-size-limit 0}</high-units>
          <unit-calculator>BINARY</unit-calculator>
    <!--      <expiry-delay>{back-expiry 1h}</expiry-delay> -->
          <flush-delay>1m</flush-delay>
          <cachestore-scheme></cachestore-scheme>
        </local-scheme>
        <!--
        Backing map scheme definition used by all the caches that do
        not require any eviction policies
        -->
        <local-scheme>
          <scheme-name>unlimited-backing-map</scheme-name>
        </local-scheme>
       <!--
        ReadWriteBackingMap caching scheme.
        -->
        <read-write-backing-map-scheme>
          <scheme-name>example-read-write</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>example-binary-backing-map</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme></cachestore-scheme>
          <read-only>true</read-only>
          <write-delay>0s</write-delay>
        </read-write-backing-map-scheme>
        <!--
        Overflow caching scheme with example eviction local cache
        in the front-tier and the example LH-based cache in the back-tier.
        -->
        <overflow-scheme>
          <scheme-name>example-overflow</scheme-name>
          <front-scheme>
            <local-scheme>
              <scheme-ref>example-binary-backing-map</scheme-ref>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <external-scheme>
              <scheme-ref>example-bdb</scheme-ref>
            </external-scheme>
          </back-scheme>
        </overflow-scheme>
        <!--
        External caching scheme using Berkley DB.
        -->
        <external-scheme>
          <scheme-name>example-bdb</scheme-name>
          <bdb-store-manager>
            <directory></directory>
          </bdb-store-manager>
          <high-units>0</high-units>
        </external-scheme>
        <!--
        External caching scheme using memory-mapped files.
        -->
        <external-scheme>
          <scheme-name>example-nio</scheme-name>
          <nio-file-manager>
            <initial-size>8MB</initial-size>
            <maximum-size>512MB</maximum-size>
            <directory></directory>
          </nio-file-manager>
          <high-units>0</high-units>
        </external-scheme>
        <!--
        Invocation Service scheme.
        -->
        <invocation-scheme>
          <scheme-name>example-invocation</scheme-name>
          <service-name>InvocationService</service-name>
          <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
        </invocation-scheme>
        <!--
        Proxy Service scheme that allows remote clients to connect to the
        cluster over TCP/IP.
        -->
        <proxy-scheme>
          <scheme-name>example-proxy</scheme-name>
          <service-name>TcpProxyService</service-name>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address system-property="tangosol.coherence.extend.address">localhost</address>
                <port system-property="tangosol.coherence.extend.port">9099</port>
              </local-address>
            </tcp-acceptor>
          </acceptor-config>
          <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>
    Thanks,
    Andrew

  • Publish date and Current Version

    Hi - I've got auditing configured for my page group.
    I'd like to create a new version of my item which is to be published at a date in the future, replacing the existing version which will remain valid until then.
    Is this achievable using 'out of the box' features? It seems that if I set the new item to the current version, then nothing is displayed until the publish date. If I leave the existing item as current, then the new item is never displayed without manually switching the version to be current independent of publish date.
    Any ideas?
    thanks

    hi,
    the problem is that the publish date is only active for the current version of the item, so it is not possible to publish a version of an item at a given time: you need to make it the current version before its publish date attribute takes effect.
    the only workaround i can think of is using the CM APIs:
    http://portalstudio.oracle.com/pls/ops/docs/FOLDER/COMMUNITY/PDK/plsql/doc/pldoc_9026/wwsbr_api.html
    there is a function called modify_item. you could run a job at the given date that modifies the items you want and adds them as a new version, or updates the existing version of the item.
    unfortunately, your use case is currently not covered from within the browser UI.
    regards,
    christian

  • BEA's Publisher - Display Published Content Items

    I'm using BEA's Publisher product with ALUI. I want to have a simple portlet that just displays Content Items that I've published, but after playing with it for a couple of hours, I find myself stuck.
    I have a Data Entry template that just takes a name and a file, and a presentation template associated with that. But I can't figure out how to make the presentation template display a list of published content items associated with the Data Entry template.
    I tried using the Tag Helper and I saw that the file properties were available, e.g. name, location, length, however I can't get the template to actually display the information.
    Also, since I can't publish said presentation template (you aren't allowed to publish a presentation template associated with a data entry template), I have to make ANOTHER presentation template, and include the first one, and then make a new portlet that displays that second template. Is this how it's supposed to work? It seems awfully complicated for such a simple task.
    I would greatly appreciate any input anyone can give. Thanks!
    Edited by: user10704201 on Dec 12, 2008 12:32 PM

    " I can't figure out how to make the presentation template display a list of published content items associated with the Data Entry template. "
    While I believe that this is a correct statement (you can't use pcs:foreach to get all items associated with a DET), I think what you really want is to display all the content items in a folder or in a list. The 2 ways I most commonly do it are:
    use a list
    1) create a DET for the main portlet item (for example, news stories) with a property of type list
    2) create a DET for the type of item that goes into the list (for example, news article)
    3) create a Content Item based on your main portlet item DET
    4) add items to the list
    5) iterate through the list in your presentation template (pcs:foreach expr="item.myList" var="mylistvar"...)
    You can also tell each news article to automatically add itself to the news stories list. (Look in the DET for this setting)
    use a folder
    you can also just create your list items in a folder and then loop through the folder (pcs:foreach expr="folderByPath(item.folder, 'folderName')"...)

  • BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with a 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree whose key is the combined (4-byte integer, 8-byte integer) and a zero-length or 1-length dummy data item (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
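    A minimal sketch of that composite-key scan, written against the BDB Java bindings (the byte layout and names are assumptions; getSearchKeyRange and getNext correspond to DB_SET_RANGE and DB_NEXT):
    import com.sleepycat.db.Cursor;
    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.OperationStatus;
    import java.nio.ByteBuffer;
    import java.util.Arrays;
    public class CompositeKeyScan {
        // Pack the 4-byte key and 8-byte data MSB-first (ByteBuffer's default),
        // so the default lexical comparison preserves numeric order.
        static byte[] compositeKey(int key, long data) {
            return ByteBuffer.allocate(12).putInt(key).putLong(data).array();
        }
        // Emulates DB_NEXT_DUP over composite keys with a prefix check.
        static void scan(Database db, int key) throws DatabaseException {
            byte[] prefix = ByteBuffer.allocate(4).putInt(key).array();
            DatabaseEntry k = new DatabaseEntry(compositeKey(key, 0L));
            DatabaseEntry d = new DatabaseEntry();
            Cursor c = db.openCursor(null, null);
            try {
                // DB_SET_RANGE: position at the first composite key >= (key, 0).
                OperationStatus st = c.getSearchKeyRange(k, d, null);
                while (st == OperationStatus.SUCCESS
                        && Arrays.equals(Arrays.copyOf(k.getData(), 4), prefix)) {
                    long data = ByteBuffer.wrap(k.getData(), 4, 8).getLong();
                    System.out.println(key + " -> " + data);
                    st = c.getNext(k, d, null); // DB_NEXT
                }
            } finally {
                c.close();
            }
        }
    }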
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
     while (i < hcp->dup_tlen) {
          memcpy(&len, data, sizeof(db_indx_t));
          data += sizeof(db_indx_t);
          DB_SET_DBT(cur, data, len);
          /*
           * If we find an exact match, we're done. If in a sorted
           * duplicate set and the item is larger than our test item,
           * we're done. In the latter case, if permitting partial
           * matches, it's not a failure.
           */
          *cmpp = func(dbp, dbt, &cur);
          if (*cmpp == 0)
               break;
          if (*cmpp < 0 && dbp->dup_compare != NULL) {
               if (flags == DB_GET_BOTH_RANGE)
                    *cmpp = 0;
               break;
    What's the expert opinion on this subject?
    Vincent
    Message was edited by:
    user552628

    Hi,
    The special thing about it is that with a given key,
    there can be a LOT of associated data, thousands to
    tens of thousands. To illustrate, a btree with a 8192
    byte page size has 3 levels, 0 overflow pages and
    35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note
    that I wrote "can", since some keys only have a few
    dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default
    lexical ordering with set_dup_compare is OK, so I
    don't touch that. I'm getting the data items sorted
    as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA)
    performance", due to a lot of disk read operations.
    In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (the search time depends on the number of keys stored in the underlying tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Given that for each key there can be a large number of associated data items (up to thousands, or tens of thousands), an impressive number of pages has to be brought into the cache to check against the duplicate criteria.
    Of course, the problem of sizing the cache and the database's pages arises here. Both settings should tend toward large values, so that the cache can accommodate large pages (each hosting hundreds of records).
    Setting the cache and the page size to their ideal values is a process of experimenting.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
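    For what it's worth, both knobs are set when the environment and database are opened; a minimal sketch using the BDB Java bindings (the values are starting points to experiment with, not recommendations):
    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseConfig;
    import com.sleepycat.db.DatabaseType;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import java.io.File;
    public class DupTreeSetup {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envCfg = new EnvironmentConfig();
            envCfg.setAllowCreate(true);
            envCfg.setInitializeCache(true);
            // A generous cache keeps duplicate pages in memory during DB_NODUPDATA puts.
            envCfg.setCacheSize(256L * 1024 * 1024);
            Environment env = new Environment(new File("dbenv"), envCfg);
            DatabaseConfig dbCfg = new DatabaseConfig();
            dbCfg.setAllowCreate(true);
            dbCfg.setType(DatabaseType.BTREE);
            dbCfg.setSortedDuplicates(true); // the DB_DUPSORT flag
            dbCfg.setPageSize(8192);         // tune per the pagesize doc linked above
            Database db = env.openDatabase(null, "dups.db", null, dbCfg);
            db.close();
            env.close();
        }
    }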
    While there may be a lot of reasons for this anomaly,
    I suspect BDB spends a lot of time tracking down
    duplicate data items.
    I wonder if in my case it would be more efficient to
    have a b-tree with as key the combined (4 byte
    integer, 8 byte integer) and a zero-length or
    1-length dummy data (in case zero-length is not an
    option).
    Indeed, this should be the best alternative, but testing must be done first. Try this approach and let us know how it goes.
    You can have records with a zero-length data portion.
    Also, you could provide more information on whether or not you're using an environment and, if so, how you configured it, etc. Have you thought of using multiple threads to load the data?
    Another possibility would be to just add all the
    data integers as a single big giant data blob item
    associated with a single (unique) key. But maybe this
    is just doing what BDB does... and would probably
    exchange "duplicate pages" for "overflow pages".
    This is a terrible approach, since bringing an overflow page into the cache is more time-consuming than bringing in a regular page, and thus a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    Or, the slowdown is a BTREE thing and I could use a
    hash table instead. In fact, what I don't know is how
    duplicate pages influence insertion speed. But the
    BDB source code indicates that in contrast to BTREE
    the duplicate search in a hash table is LINEAR (!!!)
    which is a no-no (from hash_dup.c):
    The Hash access method has, as you observed, a linear duplicate search (a search and lookup time proportional to the number of items in the bucket). Combined with the fact that you don't want duplicate data, using the Hash access method may not improve performance.
    This is a performance/tuning problem, and it involves a lot of resources on our part to investigate. If you have a support contract with Oracle, please don't hesitate to raise your issue on Metalink, or indicate that you want this issue to be taken private, and we will create an SR for you.
    Regards,
    Andrei

  • Upload multiple files to a data item in sharepoint list

    The image above shows a list item with two PDF files attached to it. This is an Access database that was pushed to this SharePoint list. When we attached these files, we used "Attach File" from the edit menu at the top of the page.
    They are put into a data item called "copy of separation report", which I can't seem to find when I edit the list. Further to this, we would like to be able to upload multiple items into their own data fields, i.e.
    one could be separation report, another could be accidents, and another would be disciplinary. Each would have the capability of having multiple items uploaded to it.
    What am I missing?

    Since you can't attach a document to a list item field, you may need to think about it the other way around. You can create a document library and give the document library all these fields (separation report, copy of separation report, etc.). So instead of the list item having the documents attached, the document library will have the fields attached.
    You can also group the fields into two groups - fields that are not directly related to a document and fields that are directly related to a document. Then move the document-related fields to the document library, create another list with the non-document fields, and link this new list to the document library using a lookup.
    Thanks,
    Sohel Rana
    http://ranaictiu-technicalblog.blogspot.com

Maybe you are looking for

  • For Payment Terms changes, where can we maintain the relevant entries

    Hi Gurus, my query: for Payment Terms, where can we maintain the relevant entries other than the customer master, from an FI point of view? Please guide me on this. Thanks in advance, Poonam

  • I want a hotkey for 'home'. Where is that hotkey for 'home'?

    See, so far I understand that there will be no customisable hotkey feature for Firefox. But couldn't you just add a hotkey for the 'home' button? Ctrl+Space for example. Then one could easily jump to e.g. his/her favourite search engine.

  • Meeting place Webex Node 8.0 Web interface

    Hi all, I just installed a MeetingPlace WebEx node, and I want to know the exact URL to access this server. Thanks in advance. Hicham

  • External Drives will not mount

    Hi guys ... this is my first post in these forums, so sorry if it is in the wrong category. I have a Mac Pro, purchased in Feb this year. I have an iOmega Ultramax firewire drive, that is partitioned into four disks. I have had time machine running w

  • Disabling Document Discount Field

    Hi All, I want to disable the discount field on the sales order document. How can I achieve this? I don't want users to be able to change the discount value at the document level. Thanks