Logging Behavior

I have noticed that output from running Java class files, such as error messages, seems to be automatically cleared from the log window from time to time. Is there a way to keep this from happening?
Has this happened to anyone else?
I also notice that sometimes a new tab, labeled with the project name, will automatically appear in the log window. Could someone tell me what controls or determines this behavior?
Thanks,
Dave

Hi Dave,
About the output being cleared...
In the Project Properties dialog, on the Runner->Options panel, under "Before Running:" there is a checkbox labeled "Clear Log". If this is checked, the log window will be cleared each time the project is restarted.
About the new tab in the log window...
The first time you start running your project, a tab will be added to the log window for that process's output. The second time you start running your project, JDev will try to re-use the tab. However, if the first process is still alive, JDev will create another tab for the second process.
Also, if you run your project and then you debug your project, you will have one tab from running and a second tab for debugging.
-Liz

Similar Messages

  • Suggestion for default arch logging behavior

    Okay let me throw this out there and see what comes back...
    Every Linux distribution I have used (apart from arch) has a boot log file enabled by default, or provides a simple Yes/No flag to turn the feature on.
    Arch does not.  Moreover, I have searched the forums and posted a question or two myself about the topic.  No one seems to know the answer, and a lot of responses seem to indicate that, amongst the arch community, the possibility of a boot log file is treated like: a) something that no one in their right mind would want, or b) something that maybe, might be useful, but that no one can figure out how to implement.
    What is this?  What is going on?  Arch has a bootlogd binary in /sbin/.  Presumably it actually works, but no one seems to know where to put the call to the binary, or why they would put it there.
    This seems like an easy-to-add feature that a developer or moderator (anyone who really knows the system layout well) could make the default for future installs.  It just makes a text file that is a) not large, and b) can be very useful at times.
    So how about it?  Why not make this part of the default arch logging set up?  (and of course explain what you did that worked.)

    tomk:
    Sorry to disclose my ignorance, but I'm not sure how to do that.
    What I have been doing though is trying to see in more detail how it is done on a debian system which I have access to.  Here is what I've learned so far:
    There are not that many references to bootlogd on the system, so it might be possible (for me) to track down what is going on:
    root@wave32p:/etc# locate bootlogd
    /etc/default/bootlogd
    /etc/init.d/bootlogd
    /etc/init.d/stop-bootlogd
    /etc/init.d/stop-bootlogd-single
    /sbin/bootlogd
    /usr/share/man/man8/bootlogd.8.gz
    1) "/etc/default/bootlogd" must be edited (trivially) such that a "No" becomes a "Yes"  -- this seems like just a master switch.
    2) the man files are exactly the same on the two systems, and the output of "/sbin/bootlogd -v" is the same on the two systems, however the size of the bootlogd binary itself is not the same on both systems (larger on debian64 system).  not sure what to make of that, but it is not what I was hoping to see.
    3) the script "/etc/init.d/bootlogd" runs with (eg.) a "start/stop" flag, the same as most "functions" under arch that have scripts associated with them.
    4) it would seem that I have to grind my way through the above script if I'm going to make any progress.  I'm doing that in my spare time at the moment, though it's a challenge since it's been a few years since I've written bash scripts on a regular basis.  FYI, here is the /etc/init.d/bootlogd script verbatim (additional note: the option -r (below) is supported on the arch version of the bootlogd binary, but the -c option does not seem to be...interesting?  Here is the man page entry for -c: "Attempt to write to the logfile even if it does not yet exist. Without this option, bootlogd will wait for the logfile to appear before attempting to write to it. This behavior prevents bootlogd from creating logfiles under mount points."):
    #! /bin/sh
    ### BEGIN INIT INFO
    # Provides:          bootlogd
    # Required-Start:    mountdevsubfs
    # X-Start-Before:    hostname keymap keyboard-setup procps pcmcia hwclock hwclockfirst hdparm hibernate-clean
    # Required-Stop:
    # Default-Start:     S
    # Default-Stop:
    # Short-Description: Start or stop bootlogd.
    # Description:       Starts or stops the bootlogd log program
    #                    which logs boot messages.
    ### END INIT INFO
    PATH=/sbin:/bin # No remote fs at start
    DAEMON=/sbin/bootlogd
    [ -x "$DAEMON" ] || exit 0
    NAME=bootlogd
    DESC="boot logger"
    BOOTLOGD_OPTS="-r -c"
    [ -r /etc/default/bootlogd ] && . /etc/default/bootlogd
    . /lib/init/vars.sh
    . /lib/lsb/init-functions
    # Because bootlogd is broken on some systems, we take the special measure
    # of requiring it to be enabled by setting an environment variable.
    case "$BOOTLOGD_ENABLE" in
        [Nn]*)
            exit 0
            ;;
    esac
    # Previously this script was symlinked as "stop-bootlogd" which, when run
    # with the "start" argument, should stop bootlogd. Now stop-bootlogd is
    # a distinct script, but for backward compatibility this script continues
    # to implement the old behavior.
    SCRIPTNAME=${0##*/}
    SCRIPTNAME=${SCRIPTNAME#[SK]??}
    ACTION="$1"
    case "$0" in
        *stop-bootlog*)
            [ "$ACTION" = start ] && ACTION=stop
            ;;
    esac
    case "$ACTION" in
        start)
            # PATH is set above
            log_daemon_msg "Starting $DESC" "$NAME"
            if [ -d /proc/1/. ]
            then
                umask 027
                start-stop-daemon --start --quiet --exec $DAEMON -- \
                    $BOOTLOGD_OPTS
                ES=$?
            else
                $DAEMON $BOOTLOGD_OPTS
                ES=$?
            fi
            log_end_msg $ES
            ;;
        stop)
            PATH=/bin:/sbin:/usr/bin:/usr/sbin
            log_daemon_msg "Stopping $DESC" "$NAME"
            start-stop-daemon --oknodo --stop --quiet --exec $DAEMON
            ES=$?
            sleep 1
            log_end_msg $ES
            if [ -f /var/log/boot ] && [ -f /var/log/boot~ ]
            then
                [ "$VERBOSE" = no ] || log_action_begin_msg "Moving boot log file"
                # bootlogd writes to boot, making backup at boot~
                cd /var/log && {
                    chgrp adm boot || :
                    savelog -q -p -c 5 boot \
                        && mv boot.0 boot \
                        && mv boot~ boot.0
                    ES=$?
                    [ "$VERBOSE" = no ] || log_action_end_msg $ES
                }
            fi
            ;;
        restart|force-reload)
            /etc/init.d/bootlogd stop
            /etc/init.d/bootlogd start
            ;;
        status)
            status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
            ;;
        *)
            echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload|status}" >&2
            exit 3
            ;;
    esac
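    The "stop-bootlogd" compatibility logic in that script leans on two POSIX parameter expansions plus a pattern match on $0. A standalone sketch of just that trick (the rc paths below are made up for the demo):

    ```shell
    #!/bin/sh
    # Standalone illustration of the parameter-expansion tricks the init
    # script uses: strip the directory and any S20/K80-style rc prefix from
    # $0, and flip "start" to "stop" when invoked via a stop-bootlogd link.
    demo() {
        path="$1"      # stands in for $0
        action="$2"    # stands in for $1
        name=${path##*/}        # drop the directory part
        name=${name#[SK]??}     # drop an S??/K?? prefix, if any
        case "$path" in
            *stop-bootlog*)
                [ "$action" = start ] && action=stop
                ;;
        esac
        echo "$name $action"
    }
    demo /etc/rc2.d/S20bootlogd start          # -> bootlogd start
    demo /etc/rc2.d/S20stop-bootlogd start     # -> stop-bootlogd stop
    ```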

  • Bookmark, Logging behavior when Relication Agents stopped/started

    I have a few questions about bi-directional replication for 1 DSN on TT 7.0.5 with disk-based logging.
    (1) If the replication agents are stopped on both machines, will the log files stop being written? Or do they accumulate?
    (2) When the agents are started, do they continue replication from the original bookmarks?
    I'm trying to determine if the 2 machines become out-of-sync when agents are stopped, DSNs continue to be updated, and then agents are started.
    Thanks,
    Linda

    I'll answer all the questions in one post if I may:
    +(1) If the replication agents are stopped on both machines, will the log files stop being written? Or do they accumulate?+
    Each replication flow (with bi-directional replication there are two flows, A -> B and B -> A) has a state (start, stop, failed, pause) which is independent of whether or not the replication agent is running (the replication agent is what performs the actual capture, propagate, apply, and acknowledge steps).
    Start - log bookmarks are maintained and if repagent is running replication flow occurs.
    Stop - no bookmarks are maintained (logs can be purged), no flow even if repagent is running.
    Failed - same as stop but only set automatically by system, typically if LOG THRESHOLD exceeded.
    Pause - bookmarks maintained, no replication flow occurs even if repagent is running
    The default state for a flow is 'start' but this can be manipulated via ttRepAdmin and may be set automatically if e.g. LOG THRESHOLD is exceeded. So, stopping the repagents on one or both machines does not change the state of the flow. Logs will accumulate until the repagents are restarted.
    +(2) When the agents are started, do they continue replication from the original bookmarks?+
    Yes, everything will carry on from the original bookmarks. Note that unless you have strict workload partitioning this scenario may exacerbate any conflict issues.
    +I'm trying to determine if the 2 machines become out-of-sync when agents are stopped, DSNs continue to be updated, and then agents are started.+
    As long as you have strict workload partitioning to ensure no conflicts then there is no problem. Without workload partitioning, divergence is possible even under normal conditions, and even if conflict resolution is used.
    +What is the best way to monitor that log space is approaching a disk-full state?+
    This is really an O/S-level issue. You can use a regularly executed script that parses the output of something like 'df -k', or you could write an application that uses the relevant O/S calls to monitor the space in the filesystem. This is easiest if the logs have a dedicated filesystem (which they should have anyway, for performance reasons). From the TimesTen side you can use ttBookmark to see if the 'hold' LSN is starting to fall further and further behind the 'current' LSN. This is an indication that replication is not able to keep up with the workload and that logs are therefore accumulating.
    Chris
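    The 'df -k' approach Chris describes can be sketched as a small shell check. The filesystem path and the 80% threshold below are placeholders, not values from the thread; adjust them for your log filesystem:

    ```shell
    #!/bin/sh
    # Minimal sketch of a disk-space monitor for the replication log
    # filesystem: parse POSIX `df` output and warn past a threshold.
    LOG_FS="${LOG_FS:-/var/log}"      # placeholder: your dedicated log fs
    THRESHOLD="${THRESHOLD:-80}"      # placeholder: warn at 80% usage

    usage_pct() {
        # POSIX `df -P` prints capacity as e.g. "42%" in the 5th column.
        df -kP "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
    }

    check() {
        pct=$(usage_pct "$LOG_FS")
        if [ "$pct" -ge "$THRESHOLD" ]; then
            echo "WARNING: $LOG_FS is ${pct}% full (threshold ${THRESHOLD}%)"
            return 1
        fi
        return 0
    }

    check || :   # demo invocation; don't abort the shell if over threshold
    ```

    Run it from cron every few minutes; pair it with ttBookmark output if you also want the hold-LSN lag view from the TimesTen side.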

  • Sharepoint log files growing huge

    Once again a SharePoint question :)
    I ran the following script against our SharePoint 2013 farm:
    #Specify the location of the CSV file here.
    $r = Import-Csv C:\folder\users.csv
    foreach($i in $r){
    #The following line displays the current URL.
    Write-Host "The URL is:"$i.Url
    #Disables the "Minimal Download Strategy" feature under "Site Features".
    #Disable-SPFeature -Identity "MDSFeature" -Url $i.Url -force -confirm:$false
    #Enables the "SharePoint Server Publishing" feature under "Site Features".
    Enable-SPFeature -Identity "PublishingSite" -Url $i.Url -force -confirm:$false
    #Enables the "SharePoint Server Publishing Infrastructure" feature under "Site Collection Features".
    Enable-SPFeature -Identity "PublishingWeb" -Url $i.Url -force -confirm:$false
    }
    The csv file being imported contains 2000+ rows with users' MySite links where we want to enable/disable several features.
    The script does what it is supposed to do, but while running it, at around user No. ~15, the log files under "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\LOGS" start to grow huge (~5 GB).
    They are mostly filled with this:
    09/17/2014 11:04:27.72 PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable Potentially excessive number of SPRequest objects (18) currently unreleased on thread 6. Ensure that this object or its parent (such as an SPWeb or SPSite) is being properly disposed. This object is holding on to a separate native heap.This object will not be automatically disposed. Allocation Id for this object: {D5F7BC80-8C88-4E17-9985-782F9724F2B9} Stack trace of current allocation: at Microsoft.SharePoint.SPGlobal.CreateSPRequestAndSetIdentity(SPSite site, String name, Boolean bNotGlobalAdminCode, String strUrl, Boolean bNotAddToContext, Byte[] UserToken, SPAppPrincipalToken appPrincipalToken, String userName, Boolean bIgnoreTokenTimeout, Boolean bAsAnonymous) at Microsoft.SharePoint.SPWeb.InitializeSPRequest() at Microsoft.SharePoint.SPWeb.EnsureSPRequest() at Microsof... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...t.SharePoint.SPWeb.SetAllowUnsafeUpdates(Boolean allowUnsafeUpdates) at Microsoft.SharePoint.SPPageParserNativeProvider.<>c__DisplayClass1.<UpdateBinaryPropertiesForWebParts>b__0() at Microsoft.SharePoint.SPSecurity.RunAsUser(SPUserToken userToken, Boolean bResetContext, WaitCallback code, Object param) at Microsoft.SharePoint.SPPageParserNativeProvider.UpdateBinaryPropertiesForWebParts(Byte[]& userToken, Guid& tranLockerId, Guid siteId, Int32 zone, String webUrl, String documentUrl, Object& registerDirectivesData, Object& connectionInformation, Object& webPartInformation, IntPtr pWebPartUpdater) at Microsoft.SharePoint.Library.SPRequestInternalClass.EnableModuleFromXml(String bstrSetupDirectory, String bstrFeatureDirectory, String bstrUrl, String bstrXML, Boolean fForceUng... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...host, ISPEnableModuleCallback pModuleContext) at Microsoft.SharePoint.Library.SPRequestInternalClass.EnableModuleFromXml(String bstrSetupDirectory, String bstrFeatureDirectory, String bstrUrl, String bstrXML, Boolean fForceUnghost, ISPEnableModuleCallback pModuleContext) at Microsoft.SharePoint.Library.SPRequest.EnableModuleFromXml(String bstrSetupDirectory, String bstrFeatureDirectory, String bstrUrl, String bstrXML, Boolean fForceUnghost, ISPEnableModuleCallback pModuleContext) at Microsoft.SharePoint.SPModule.ActivateFromFeature(SPFeatureDefinition featdef, XmlNode xnModule, SPWeb web) at Microsoft.SharePoint.Administration.SPElementDefinitionCollection.ProvisionModules(SPFeaturePropertyCollection props, SPSite site, SPWeb web, SPFeatureActivateFlags activateFlags, Boole... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...an fForce) at Microsoft.SharePoint.Administration.SPElementDefinitionCollection.ProvisionElements(SPFeaturePropertyCollection props, SPWebApplication webapp, SPSite site, SPWeb web, SPFeatureActivateFlags activateFlags, Boolean fForce) at Microsoft.SharePoint.SPFeature.Activate(SPSite siteParent, SPWeb webParent, SPFeaturePropertyCollection props, SPFeatureActivateFlags activateFlags, Boolean fForce) at Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean fMarkOnly) at Microsoft.SharePoint.SPFeatureCollection.CheckSameScopeDependency(SPFeatureDefinition featdefDependant, SPFeatureDependency featdep, SPFeatureDefinition featdefDep... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...endency, Boolean fActivateHidden, Boolean fUpgrade, Boolean fForce, Boolean fMarkOnly) at Microsoft.SharePoint.SPFeatureCollection.CheckFeatureDependency(SPFeatureDefinition featdefDependant, SPFeatureDependency featdep, Boolean fActivateHidden, Boolean fUpgrade, Boolean fForce, Boolean fMarkOnly, FailureReason& errType) at Microsoft.SharePoint.SPFeatureCollection.CheckFeatureDependencies(SPFeatureDefinition featdef, Boolean fActivateHidden, Boolean fUpgrade, Boolean fForce, Boolean fThrowError, Boolean fMarkOnly, List`1& missingFeatures) at Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, SPFeatureActivateFlags activateFlags, Boolean force, Boolean fMarkOnly) at Microsoft.SharePoint.S... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...PFeature.ActivateDeactivateFeatureAtSite(Boolean fActivate, Boolean fEnsure, Guid featid, SPFeatureDefinition featdef, String urlScope, String sProperties, Boolean fForce) at Microsoft.SharePoint.SPFeature.ActivateDeactivateFeatureAtScope(Boolean fActivate, Guid featid, SPFeatureDefinition featdef, String urlScope, Boolean fForce) at Microsoft.SharePoint.PowerShell.SPCmdletEnableFeature.UpdateDataObject() at Microsoft.SharePoint.PowerShell.SPCmdlet.ProcessRecord() at System.Management.Automation.CommandProcessor.ProcessRecord() at System.Management.Automation.CommandProcessorBase.DoExecute() at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate) at System.Management.Automati... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...on.PipelineOps.InvokePipeline(Object input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][] commandRedirections, FunctionContext funcContext) at lambda_method(Closure , Object[] , StrongBox`1[] , InterpretedFrame ) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.Interpreter.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.LightLambda.RunVoid1[T0](T0 arg0) at System.... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...Management.Automation.DlrScriptCommandProcessor.RunClause(Action`1 clause, Object dollarUnderbar, Object inputToProcess) at System.Management.Automation.CommandProcessorBase.DoComplete() at System.Management.Automation.Internal.PipelineProcessor.DoCompleteCore(CommandProcessorBase commandRequestingUpstreamCommandsToStop) at System.Management.Automation.Internal.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate) at System.Management.Automation.PipelineOps.InvokePipeline(Object input, Boolean ignoreInput, CommandParameterInternal[][] pipeElements, CommandBaseAst[] pipeElementAsts, CommandRedirection[][] commandRedirections, FunctionContext funcContext) at System.Management.Automation.Interpreter.ActionCallInstruction`6.R... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...un(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.Interpreter.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.LightLambda.RunVoid1[T0](T0 arg0) at System.Management.Automation.DlrScriptCommandProcessor.RunClause(Action`1 clause, Object dollarUnderbar, Object inputToProcess) at System.Management.Automation.CommandProcessorBase.DoComplete() at System.Management.Automation.Internal.PipelineProcessor.DoCompleteCore(CommandProcessorBase commandRequestingUpstreamCommandsToStop) at System.Management.Automation.Intern... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...al.PipelineProcessor.SynchronousExecuteEnumerate(Object input, Hashtable errorResults, Boolean enumerate) at System.Management.Automation.Runspaces.LocalPipeline.InvokeHelper() at System.Management.Automation.Runspaces.LocalPipeline.InvokeThreadProc() at System.Management.Automation.Runspaces.PipelineThread.WorkerProc() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart... 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    09/17/2014 11:04:27.72* PowerShell.exe (0x235C) 0x2B44 SharePoint Foundation Performance naqx Monitorable ...() 5b7cf973-1e3f-4985-bdf8-598eecf86ab6
    I tried finding something on the internet on this but either my Google Mojo is gone or there is no one else posting about this.
    Since we do not want to change the logging behavior of SharePoint (unless it is really necessary), I'd like to know: is there something wrong with my code? Is there some parameter I can use to suspend logging for this script? There must be something I'm doing horribly wrong :(
    Thanks in advance!
    (If anything I posted is unclear please let me know since English isn't my first language)
    EDIT: There is nothing productive happening on that farm. There is a web application for the MySites and one for a publishing portal (without any significant content).

    It's because MS did a poor job on the SharePoint object model. You shouldn't need to call a 'dispose' method on any object in .NET; the garbage collector should automatically identify a no-longer-required object and remove it. Unfortunately, and there might be a reason for it, that isn't true for SPWeb or SPSite objects.
    Evidently the Enable-SPFeature cmdlet contains an SPSite or SPWeb object and fails to dispose of it.
    You could try using Start-SPAssignment: http://technet.microsoft.com/en-us/library/ff607664%28v=office.15%29.aspx which some have found useful to deal with this. Another option would be to create a process that generates a new thread for each row in the csv, which will result in the objects being destroyed as that process ends.

  • Help! SQL server database log file increasing enormously

    I have 5 SSIS jobs running in the SQL Server job agent, and some of them pull transactional data into our database at a 4-hour interval. The problem is that the log file of our database is growing rapidly; in a day it eats up 160 GB of disk space. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE, but even so the log data consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. As a temporary measure I am using the DETACH approach to clean up the log.
    FYI: all the SSIS packages in the job use Transactions on some tasks, e.g. a Sequence Container.
    I want a permanent solution to keep the log file within a particular size limit, and as I said earlier I don't want the log data for future point-in-time recovery, so there is no need to take log backups at all.
    And one more problem: in our database, the transactional table has 10 million records and some master tables have over 1000 records, yet our mdf file size is now about 50 GB. I don't believe that 10 million records should amount to 50 GB. What's the problem here?
    Help me with these issues. Thanks in advance.

    +And one more problem: in our database, the transactional table has 10 million records and some master tables have over 1000 records, yet our mdf file size is now about 50 GB. I don't believe that 10 million records should amount to 50 GB. What's the problem here? Help me with these issues.+
    For the SSIS part of the question it would be better to ask in the SSIS forum, although nothing is going to change about the logging behavior. You can add some space to the log file, and you should also batch your transactions as already suggested.
    Regarding the memory question about SQL Server: once it utilizes memory, it is not going to release it unless the Windows OS faces memory pressure and SQLOS asks SQL Server to trim down its memory consumption. So if you have set max server memory to somewhere near 50 GB, SQL Server will eventually utilize that much memory. What you are seeing is totally normal. Remember it is a costly task for SQL Server to release and re-acquire memory, so it avoids that by caching as much as possible; it also caches to avoid physical reads, which are costly.
    When the log file is getting full, what does the query below return?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    Can you manually introduce a checkpoint in the ETL query? Try this; it might help you.

  • Some key figures are not being correctly reversed in the Change Log

    Hi Experts,
    I'm working with BI 7 (SP 15). I have created an ODS with the Overwrite option (Record Mode equal to ' ') and a Cube receiving data from this ODS.
    Whenever an existing record changes on the R/3 side (already previously loaded to BW), it comes through to the ODS correctly and overwrites the record in the Active Data. However, in the change log table the before-image record is not reversing all the key figures: there is one (Quantity type) which is not working and is being doubled in the cube. All the other key figures are Amount type and work fine. Please see below some examples of the change log behavior:
    First time the record comes - Change log content:
    Material   Valid Dt   Cost Value   Quantity   Rec. Mode   Req. ID
    1234       May/01     $  100.00    10         'N'         1
    Second time, when the record changes the Date:
    Material   Date       Cost Value   Quantity   Rec. Mode   Req. ID
    1234       May/29     $  100.00    10         ' '         2
    1234       May/01     $ -100.00    10         'X'         2
    Then the final result in the cube, after it is compressed, is:
    Material   Date       Cost Value   Quantity
    1234       May/29     $  100.00    20
    Also, I have a Counter in the transformation to the Cube which is a constant equal to 1; it is also being aggregated erroneously. I could create a routine to check the incoming record mode, but I'm not sure that is the best solution.
    Can anybody help me to figure this out, please?
    Thanks in advance.

    Hi Rajkumar,
    Actually it is a generic (customized) extractor, and the ODS active data is getting the delta records correctly. After I extract from R/3 and activate the ODS, the active table has correct results, but the change log table has the problem described before. All the fields come from one single extractor.
    The 0RECORDMODE that I mentioned is the setting in the transformation to the ODS. In the Technical rule group you can set 0RECORDMODE; in my case it is set to ' ' (blank), which means it will receive After-Images from the extractor (which I think is correct in my case). The most curious thing is that it works for almost all key figures (all Amount type), except for the Price Unit, which is Quantity type.
    Any other ideas?
    Thanks.

  • Logging CFMail usage

    Running ColdFusion 7.0.2 Enterprise, I have not had any luck
    with the CFMail logging options.
    In particular, I have selected "Log all mail messages sent by
    ColdFusion" but it doesn't seem to work. I have checked the default
    CF log folder as well as the folder I specified under logging
    configuration, but there is no log of the mail sent.
    The logging worked fine on our previous servers running CF
    7.0.1. Any ideas? Does this feature work correctly for other
    people, or is it just me?

    After doing some more tests, I discovered that I can get the
    mail to be logged normally by configuring ColdFusion to use the
    default log location.
    But if I try to use a log directory on a different disk, the
    mail is not logged, and there is some other incorrect logging
    behavior, such as logs not being displayable in CF Admin and
    "Maximum File Size" not working.

  • ESB DB adapter and error logging

    Hi !
    I need a DB adapter in ESB to read records from a table and do a logical delete (update a status column).
    I can get it to work on my laptop, but not on our dev machines.
    I noticed that if I change the 'mcf-properties' in the wsdl-file for the READ function like this
    mcf.DriverClassName="oracle.jdbc.OracleDriver" mcf.PlatformClassName="oracle.toplink.platform.database.oracle.Oracle10Platform"
    mcf.ConnectionString="jdbc:oracle:thin:@localhost:1521:ORCLAAAAAA"
    mcf.UserName="soatest" mcf.Password="A932C53E63FFDE3D4A8267B2FCE4A0044C1B70BFD42DD194"
    so that it is wrong, I never get an error in any log file I could find.
    This makes it very tiresome to debug.
    Is there a log where I can find an error message ?
    If not , what should I do ?

    Hello Everyone,
    I am stuck with the ESB's "logging" behavior. Due to some problems we had increased the logging level to FINEST for some, and as a result there were too many messages, mainly "Traces", generated in /../oc4j/log.xml. Now the problem is that I am seeing that ESB maintains only 10 to 12 log files.
    My query is: does it archive the older log files somewhere, and if yes, in which location?
    Or does it simply overwrite the files (that doesn't seem likely)?
    For example, while searching the OC4J Diagnostic Logs from the Enterprise Manager Console it shows that there are 10 selected log files.
    But where are the older log files? We need those badly, as some of the issues we are trying to fix hinge on the error messages.
    Thanks
    --debashis

  • Database log full

    What are the options to free up the log?
    1. kill the spid filling up the log.
    2. back up the tran log - to disk, or with truncate only or no_log.
    3. change the database to simple recovery mode, delete the ldf file, and then revert back to full recovery mode.
    4. detach the database, delete the log file, and reattach the mdf again.
    I am especially interested to know if the last two are commonly used in case the DBA is not able to free up the log in other ways.
    thanks.

    First find why the log file is not truncating:
    SELECT name,
    log_reuse_wait_desc
    FROM sys.databases;
    The recovery model dictates log behavior and whether the log will truncate on checkpoint or requires a transaction log backup. The full and bulk-logged recovery models require that you back up the log to keep its size manageable; refer
    here to the different kinds of backups and their effect on the T-Log.
    The action you take will be dictated by the reason the log is not truncating. There are numerous reasons the log does not truncate, including replication, mirroring, a pending transaction log backup, an active transaction, etc. BACKUP LOG ... WITH TRUNCATE_ONLY
    is no longer supported, as it removes the ability to restore the database to a point in time.
    The last two options are last-ditch efforts to regain control of the log file size and should only be done after a full backup of the database is taken, to ensure that a viable backup is available.
    David Dye My Blog
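
    If the wait reason does turn out to be LOG_BACKUP, a minimal T-SQL sketch of the safe path (database name, backup path, and logical log-file name are placeholders, and the shrink target is only an example):

    ```sql
    -- Check why the log cannot truncate (NOTHING / LOG_BACKUP / ACTIVE_TRANSACTION / ...)
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyDatabase';        -- placeholder database name

    -- If the reason is LOG_BACKUP, back up the log to disk first...
    BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_log.trn';   -- placeholder path

    -- ...and only then, if the file itself must give space back to the OS,
    -- shrink the physical log file to a sensible size (value is in MB).
    DBCC SHRINKFILE (N'MyDatabase_log', 1024);    -- placeholder logical name
    ```

    Unlike options 3 and 4 in the question, this sequence keeps the log chain intact, so point-in-time restores remain possible.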

  • Asking for your help to Connect 4 Servers Toghether!

    Hi. For more than 3 days I've been trying to build a multi-server cast based on the attached model.
    Everything I've tried has not helped me.
    Can you kindly help me build the right "vhost" file for each server as described in the attached model?
    I really appreciate it.
    Thank you very much,
    Elad.
    ----- THE VHOST -----
    <VirtualHost>
              <!-- This tag contains a list of <Alias> tags. -->
              <AliasList>
                        <!-- You can specify additional names by which to connect to this -->
                        <!-- virtual host by specifying the name(s) in one or more Alias  -->
                        <!-- tags. Use this tag if you want to be able to connect to this -->
                        <!-- virtual host with different names. For example, if the name  -->
                        <!-- of this virtual host is "abc.macromedia.com", but you wish   -->
                        <!-- to connect by simply specifying "abc", then you can specify  -->
                        <!-- an alias "abc". Note that "abc" must still map to the same   -->
                        <!-- IP address as "abc.macromedia.com". If more than one virtual -->
                        <!-- host on the same adaptor has defined the same alias, then    -->
                        <!-- the first match that is found is taken. This may result in   -->
                        <!-- unexpected behavior.                                         -->
                        <Alias name="alias1"></Alias>
              </AliasList>
              <!-- Specifies the applications directory for this virtual host. -->
              <!-- The applications directory is the base directory where all  -->
              <!-- applications for this virtual host is defined. An app is    -->
              <!-- considered to be defined if there exists a directory with   -->
              <!-- the application name. If nothing is specified in this tag,  -->
              <!-- the applications directory is assumed to be under the vhost -->
              <!-- directory.                                                  -->
              <AppsDir>${VHOST.APPSDIR}</AppsDir>
      <!-- You can override the settings specified in Server.xml for -->
      <!-- this vhost by uncommenting the tag below. You can disable -->
      <!-- auto-close idle clients for this vhost, or change the max -->
      <!-- idle time for clients connected to this vhost. If the max -->
      <!-- idle time is not specified here, or invalid (i.e. <= 0),  -->
      <!-- then we use whatever was set at the server level.         -->
      <!--
      <AutoCloseIdleClients enable="false">
        <MaxIdleTime>3600</MaxIdleTime>
      </AutoCloseIdleClients>
      -->
              <!-- Specifies max resource limits for this virtual host. -->
              <!-- Resource limits are only honored when running in vhost scope. -->
              <ResourceLimits>
                        <!-- Max number of clients that can connect to this vhost when running as local.  -->
                        <!-- enforced by License key -->
                        <MaxConnections>-1</MaxConnections>
                        <!-- Max number of clients that can connect to this vhost when running as remote. -->
                        <!-- This is enforced by License key -->
                        <MaxEdgeConnections>-1</MaxEdgeConnections>
                        <!-- Max number of app instances that can be loaded. -->
                        <MaxAppInstances>15000</MaxAppInstances>
                        <!-- Max number of streams that can be created. -->
                        <MaxStreams>250000</MaxStreams>
                        <!-- Max number of shared objects that can be created. -->
                        <MaxSharedObjects>50000</MaxSharedObjects>
                        <!-- GC interval for application instances resources in minutes : SharedObjects, Streams and Script engine. -->
                        <AppInstanceGC>1</AppInstanceGC>
              </ResourceLimits>
              <VirtualKeys>
                        <!-- Sets the virtual key mappings for connecting players.          -->
                        <!-- When a FlashPlayer or other connects, they receive a          -->
                        <!-- virtual key that corresponds to the ranges below                    -->
                        <!-- The virtualKey may be reset as a client property in the          -->
                        <!-- server script.  If no key is specified for a player          -->
                        <!-- it will not have a key applied by default: example                    -->
                        <!-- <Key from="WIN 7,0,19,0" to="WIN 9,0,0,0">A</Key>                    -->
              </VirtualKeys>
              <!-- This section specifies virtual directory mappings for resources -->
              <!-- such as recorded streams. By using virtual directories, you can -->
              <!-- share resources across different applications. If the beginning -->
              <!-- portion of a resource's uri matches the virtual directory that  -->
              <!-- is specified, then the storage location of the resource maps to -->
              <!-- the location specified by the virtual directory mapping. To     -->
              <!-- specify a virtual directory mapping, you first specify the      -->
              <!-- virtual directory, followed by a colon, followed by the actual  -->
              <!-- storage location. Finally the first item in the key mapping     -->
              <!-- is the virtual key mapping that corresponds to this directory   -->
              <!-- If the client attempting to play has a key matching this listed -->
              <!-- virtual key, it will take that virtual mapping, example:               -->
              <!-- <Streams key="virtualKey"><virtual dir>;<actual dir></Streams> -->
              <VirtualDirectory>
                        <!-- Specifies virtual directory mapping for recorded streams.   -->
                        <!-- To specify multiple virtual directory mappings for stream,  -->
                        <!-- add additional <Streams> tags; one for each virtual dir     -->
                        <!-- mapping. Syntax for virtual directories is as follows:      -->
                        <!-- <Streams key="virtualKey">foo;c:\data</Streams>.                      -->
                        <!-- This maps all streams whose virtual                               -->
                        <!-- key matches the listed key, if given and                                -->
                        <!-- names begin with "foo/" to the physical directory c:\data.  -->
                        <!-- For example, the stream named "foo/bar" would map to the    -->
                        <!-- physical file "c:\data\bar.flv". Similarly, if you had a    -->
                        <!-- stream named "foo/bar/x", then we first try to find a vdir  -->
                        <!-- mapping for "foo/bar". Failing to do so, we then check for  -->
                        <!-- a vdir mapping for "foo". Since there is one, the stream    -->
                        <!-- "foo/bar" corresponds to the file "c:\data\bar\x.flv".      -->
                        <!-- Virtual keys are optional, but if set allow more than one           -->
                        <!-- mapping                                                                        -->
                        <Streams></Streams>
              </VirtualDirectory>
              <!-- This tag specifies the primary DNS suffix for this vhost. If a  -->
              <!-- reverse DNS lookup fails to return the domain as part of the    -->
              <!-- hostname, then this tag is used as the domain suffix.           -->
              <DNSSuffix></DNSSuffix>
              <!-- This tag specifies a comma delimited list of domains that are   -->
              <!-- allowed to connect to this vhost. If this tag is empty, then    -->
              <!-- only connections from the same domain that is being connected   -->
              <!-- to will be allowed. If this tag is not empty, then only the     -->
              <!-- domains explicitly listed will be accepted. For example,        -->
              <!-- <Allow>macromedia.com, yourcompany.com</Allow> will only allow  -->
              <!-- connections from the macromedia.com & yourcompany.com domains.  -->
              <!-- If you wish to allow localhost connections, you will specify    -->
              <!-- "localhost". For example, <Allow>localhost</Allow>. To allow    -->
              <!-- all domains, specify "all".  For example, <Allow>all</Allow>.   -->
              <Allow>all</Allow>
              <Proxy>
                        <!-- A vhost may be configured to run apps locally or remotely.  -->
                        <!-- A vhost that is not explicitly defined gets aliased to      -->
                        <!-- the default vhost and is configured as such. A proxy server -->
                        <!-- runs all its apps remotely, while a normal server runs all  -->
                        <!-- its apps locally. The following parameter defines whether   -->
                        <!-- this vhost is running local or remote apps, the default is  -->
                        <!-- local. It may be set to either local or remote              -->
                        <Mode>local</Mode>
                            <!-- This setting specifies the time for which this server  -->
            <!-- wait for a response from the remote server before      -->
            <!--  timing out.  Time specified is in seconds. Default    -->
            <!--  value is 2 seconds.                                                   -->
                            <RequestTimeout>2</RequestTimeout>
                        <!-- Whether this is an anonymous proxy. An anonymous proxy does -->
                        <!-- not modify the incoming url. This way it does not disturb   -->
                        <!-- the routing of explicitly chained proxies. It is false by   -->
                        <!-- default, must be true for interception proxies.             -->
                        <Anonymous>false</Anonymous>
                        <!-- Proxy server disk cache settings                            -->
                        <CacheDir enabled="false" useAppName="true">
                                  <!-- Specifies the physical location of the proxy cache.  By default   -->
                          <!-- they are placed in cache/ in the server installation directory.   -->
                                   <!-- The value entered here must be an absolute path; relative         -->
                                  <!-- paths will be ignored and will revert to the default directory.   -->
                                  <Path></Path>
                                  <!-- Specifies the maximum allowed size of the disk cache, in          -->
                                  <!-- gigabytes.  AMS does LRU cleanup of the cache to keep it under    -->
                      <!-- the maximum.  The default value is 32 GB.  A value of 0 will      -->
                                  <!-- disable the disk cache.                                                                   -->
                                  <MaxSize>32</MaxSize>
                        </CacheDir>
                        <!-- A proxy's outgoing connection can be bound to a specific    -->
                        <!-- local ip address. This allows for separating incoming and   -->
                        <!-- outgoing connections onto different network interfaces. This-->
                        <!-- is useful in configuring a 'Transparent' or 'Interception'  -->
                        <!-- proxy. If a LocalAddress is not specified, then outgoing    -->
                        <!-- connections bind to INADDR_ANY, which is the default.       -->
                        <!-- If a literal address is specified here, the IP version of literal -->
                        <!-- address must match the IP version of the Origin server's address. -->
                        <!-- The workaround is to use the hostname of the network interface    -->
                        <!-- (hostname with both A and AAAA DNS records) that will bind to     -->
                        <!-- either the IPv4 or IPv6 address of the interface.                 -->
                        <LocalAddress></LocalAddress>
                        <!-- This section specifies routing information. Administrators  -->
                        <!-- can configure how to route connections based on the desired -->
                        <!-- destination.                                                -->
                        <!-- The protocol attribute specifies the protocol to use for    -->
                        <!-- the outgoing connection. If specified, it must be set to    -->
                        <!-- either "rtmp" or "rtmps" to indicate a non-secure or secure -->
                        <!-- connection respectively. If nothing is specified, the       -->
                        <!-- out-going connection will use the same protocol as the      -->
                        <!-- in-coming connection. You can override this for each route  -->
                        <!-- entry by specifying a protocol tag attribute in each        -->
                        <!-- <RouteEntry> tag. If none is specified, it will use what is -->
                        <!-- configured in the <RouteTable> tag.                         -->
                        <RouteTable protocol="rtmp">
                                  <!-- Maps a host:port pair, to a different host:port pair.   -->
                                  <!-- This tag is in the form <host1>:<port1>;<host2>:<port2> -->
                                  <!-- where host1:port1 is the host and port of the desired   -->
                                  <!-- destination, and host2 and port2 is what should be used -->
                                  <!-- instead. In other words, connections to host1:port1 are -->
                                  <!-- routed to host2:port2 instead. For example,             -->
                                  <!-- <RouteEntry>foo:1935;bar:80</RouteEntry>                -->
                                  <!-- This says to route connections destined for host "foo"  -->
                                  <!-- on port 1935, to host "bar" on port 80.                 -->
                                  <!-- We also allow the use of the wildcard character '*' to  -->
                                  <!-- replace <host> and/or <port>. For example,              -->
                                  <!-- <RouteEntry>*:*;foo:1935</RouteEntry>                   -->
                                  <!-- This says route connections destined for any host on    -->
                                  <!-- any port to host "foo" on port 1935.                    -->
                                  <!-- '*' can also be used on the right-hand side. When used  -->
                                  <!-- on the right-hand side, it means that the corresponding -->
                                  <!-- value on the left-hand side should be used. For example -->
                                  <!-- <RouteEntry>*:*;*:80</RouteEntry>                       -->
                                  <!-- This says route connections destined for any host on    -->
                                  <!-- any port, to the same host on port 80.                  -->
                                  <!-- Additionally, you can also specify that a host:port     -->
                                  <!-- combination be routed to null, which essentially means  -->
                                  <!-- that connections destined for that host:port combo will -->
                                  <!-- be rejected. For example,                               -->
                                  <!-- <RouteEntry>foo:80;null</RouteEntry>                    -->
                                  <RouteEntry>1.1.1.1:1935;2.2.2.2:1935</RouteEntry>
                        </RouteTable>
                        <!-- This section configures edge auto-discovery. When an edge   -->
                        <!-- connects to another server, that server may be part of a    -->
                        <!-- cluster. This edge will try to determine which server in    -->
                        <!-- that cluster we should connect to (which may or may not be  -->
                        <!-- the server specified in the uri).                           -->
                        <EdgeAutoDiscovery>
                                  <!-- Specifies whether edge auto discovery is enabled (true) -->
                                  <!-- or disabled (false). Default is disabled.               -->
                                  <Enabled>true</Enabled>
                                  <!-- This specifies whether or not to allow overriding edge  -->
                                  <!-- auto-discovery (by specifying "rtmpd" protocol). If     -->
                                  <!-- enabled, edge auto-discovery is performed by default.   -->
                                  <AllowOverride>true</AllowOverride>
                                  <!-- Specifies how long to wait (msec) for auto-discovery.   -->
                                  <!-- Warning: don't set this too low. It must be long enough -->
                                  <!-- to establish a TCP connection, perform a UDP broadcast, -->
                                  <!-- collect the UDP responses, and return an XML response.  -->
                                  <WaitTime>1000</WaitTime>
                        </EdgeAutoDiscovery>
                        <!-- If this vhost is remote mode, and you wish to configure the -->
                        <!-- properties of an out-going ssl connection to an upstream    -->
                        <!-- server, then enable this section and configure SSL props    -->
                        <!-- appropriately. The absence of the <SSL> tag will mean that  -->
                        <!-- ssl connections to upstream servers will use the default    -->
                        <!-- configuration specified in the <SSL> section of Server.xml. -->
                        <!-- For more information on each of these tags, see comments in -->
                        <!-- Server.xml. Note: this section if uncommented is ignored if -->
                        <!-- proxy mode is local.                                        -->
                        <!--
                        <SSL>
                                  <SSLVerifyCertificate>true</SSLVerifyCertificate>
                                  <SSLCACertificatePath></SSLCACertificatePath>
                                  <SSLCACertificateFile></SSLCACertificateFile>
                                  <SSLVerifyDepth>9</SSLVerifyDepth>
                                  <SSLCipherSuite>ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH</SSLCipherSuite>
                        </SSL>
                        -->
                        <!-- When a VHost is configured as proxy, the "enabled"       -->
                        <!-- attribute will determine whether aggregate messages will -->
                        <!-- be delivered from the edge cache.  Default is "true".   -->
                        <!-- If the edge server receives aggregate messages from the  -->
                        <!-- origin when this setting is disabled, the messages will  -->
                        <!-- be broken up before being cached.                        -->
                        <AggregateMessages enabled="true">
                                  <!-- This setting determines the size (in bytes) of aggregate  -->
                                  <!-- messages returned from the edge cache (when aggregate     -->
                                  <!-- messages are enabled).  Note that this setting only       -->
                                  <!-- applies to messages retrieved from the disk cache.        -->
                                  <!-- Aggregate messages received directly from the origin server-->
                                  <!-- will be returned as-is, and therefore their size is       -->
                                  <!-- determined by the origin server's settings for aggregate  -->
                                  <!-- message size. Default is 65536                            -->
                                  <!-- <MaxAggMsgSize>65536</MaxAggMsgSize> -->
                                  <!-- Maximum duration in milliseconds of an aggregate message while -->
                                  <!-- reading from edge disk cache. Default is 10000 milliseconds. -->
                                  <!-- <MaxAggMsgDuration>10000</MaxAggMsgDuration >                    -->
                        </AggregateMessages>
              </Proxy>
              <!-- This section controls some of the logging behaviors.                      -->
              <Logging>
                        <!-- This section controls access log.                                 -->
                        <Access>
                                  <!-- You can override the settings specified in Server.xml for -->
                                  <!-- this vhost by uncommenting the tag below. You can disable -->
                                  <!-- logging checkpoints for this vhost, or change the         -->
                                  <!-- checkpoint interval of this vhost. If the checkpoint      -->
                                  <!-- interval is not specified here, or invalid (i.e. <= 0),   -->
                                  <!-- then we use whatever was set at the server level.         -->
                                  <!--
                                  <Checkpoints enable="false">
                                            <LogInterval>3600</LogInterval>
                                  </Checkpoints>
                                  -->
                        </Access>
              </Logging>
    </VirtualHost>

    You've now posted at least three times and have been given the same answer every time. You've also had your posts edited for personal information every time.
    We cannot help you get past the Activation Lock; only the person to whom it is locked can. We cannot help you unlock the phone from a carrier; only the carrier to which it is locked can.
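
    Without the attached model it is hard to give exact values, but for chaining servers the relevant piece of the Vhost.xml above is the <Proxy> section with its <RouteTable>. A hedged sketch for an edge vhost that forwards every incoming connection to an origin server (the 10.0.0.1 address is purely a placeholder) might look like:

    ```xml
    <Proxy>
              <!-- "remote" makes this vhost run its apps on an upstream server -->
              <Mode>remote</Mode>
              <RouteTable protocol="rtmp">
                        <!-- route any destination host:port to the origin on port 1935 -->
                        <RouteEntry>*:*;10.0.0.1:1935</RouteEntry>
              </RouteTable>
    </Proxy>
    ```

    Each intermediate server in the chain would carry a RouteEntry pointing at its next hop, while the final origin vhost keeps <Mode>local</Mode> so that applications actually run there.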

  • Sample rate and data recording rate on NI Elvis

    I am currently working on a project that requires me to record my data at 1 ms intervals or less. Currently the lowest timing interval I can record at is 10 ms. If I change my wait timer to anything below 10, the recorded data in Excel will skip time. For example, instead of starting at 1 ms and counting 2, 3, 4, 5, 6, etc., it skips from 2 to 5, 12, 19, etc. So my question is whether this is a limitation of the NI Elvis or possibly a problem with how I've created my LabVIEW code. From an operational standpoint my program is working great, but it is the data recording that is keeping me from moving to my testing phase. Any help on this matter would be greatly appreciated.
    Other information that might be relevant:
    Operating System: Windows 7
    Processor: Intel(R) Xeon(R) CPU E31245 @ 3.00 GHz
    Memory: 12GB
    DirectX Version: 11
    Attachments:
    Count Digital(mod12).vi ‏76 KB

    Hi crashdx,
    So my immediate thought on this issue is that the code inside your primary while loop might be taking too long to process to achieve such a high sample rate, especially when making calls into external applications (such as Excel), which can take a large amount of time.
    There is a very useful debugging tool called the Performance and Memory tool. If you aren't familiar with this tool, it will allow you to see how much memory the various chunks of your code are using and, more importantly here, how much time each subVI is taking to execute. Does the code inside your while loop take longer than 1ms to run? If so, then you will definitely see unwanted logging behavior and will need to change your approach. Would it be possible to collect more than a single sample at a time and perform calculations on a large number of samples at once before writing them to Excel in bigger chunks?
    I've included a link to the LabVIEW help detailing the Profile Performance and Memory tool.
    http://zone.ni.com/reference/en-XX/help/371361H-01/lvdialog/profile/
    I would first try and figure out how long it's taking your loop code to execute and go from there.
    I hope this helps!
    Andy C.
    Applications Engineer
    National Instruments
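
    The buffering idea above (collect many samples, then write them in one go) can be sketched outside of LabVIEW. A rough Python analogue, where write_chunk stands in for the expensive Excel write:

    ```python
    def log_samples(samples, chunk_size, write_chunk):
        """Buffer samples and flush them in chunks instead of one write per sample."""
        buffer = []
        writes = 0
        for s in samples:
            buffer.append(s)
            if len(buffer) >= chunk_size:
                write_chunk(buffer)   # one expensive call covers many samples
                writes += 1
                buffer = []
        if buffer:                    # flush any partial chunk at the end
            write_chunk(buffer)
            writes += 1
        return writes

    # With 10,000 samples and a chunk size of 1,000, only 10 expensive write
    # calls occur instead of 10,000 -- the per-call overhead of the external
    # application no longer limits the achievable sample rate.
    written = []
    n_writes = log_samples(range(10_000), 1_000, written.extend)
    print(n_writes)        # → 10
    print(len(written))    # → 10000
    ```

    The same structure maps onto a LabVIEW producer/consumer pattern: one loop acquires at full speed into a queue, and a second loop drains the queue into Excel in large chunks.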

  • Strategy to control hidden features

    I am adding some auditing capability to a plug-in so the customer support can collect runtime information from the user while performing troubleshooting.
    To control the auditing/logging behavior, I need to know certain information, such as whether logging should be enabled and where to save the audit file. I could collect this info from the UI, but we are trying to make this a "hidden" feature, meaning the user will not see it in the UI.
    So what is the best (cross-platform) strategy to pass parameters to a plug-in to alter its behavior when it is started? Environment variables? Any existing examples?
    Thanks,
    Nick

    Dirk,
    Thanks for the reply. It is a very practical solution. But just for discussion, is it possible to check for certain environment variables in the plug-in?
    To make this solution work better, I need to pick a place for this folder. What's the most convenient way to find out the path to the user's home directory or InDesign's installation directory?
    Thank you,
    Nick
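
    On the environment-variable question: reading one from C++ (the language InDesign plug-ins are written in) is portable via std::getenv. A minimal sketch, where the variable name MYPLUGIN_AUDIT_DIR is purely hypothetical:

    ```cpp
    #include <cstdlib>
    #include <iostream>
    #include <string>

    // Returns the audit directory from the environment, or an empty string
    // if auditing is not enabled. MYPLUGIN_AUDIT_DIR is a made-up name;
    // pick one unlikely to collide with other software.
    std::string auditDirFromEnv() {
        const char* dir = std::getenv("MYPLUGIN_AUDIT_DIR");  // null if unset
        return dir ? std::string(dir) : std::string();
    }

    int main() {
        const std::string dir = auditDirFromEnv();
        if (dir.empty()) {
            std::cout << "auditing disabled\n";   // variable not set
        } else {
            std::cout << "auditing to " << dir << '\n';
        }
        return 0;
    }
    ```

    One caveat: environment variables are read from the process environment, so the user must set the variable before launching InDesign. For locating a default folder, the conventional variables are HOME on macOS and USERPROFILE on Windows.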

  • Windows 8.1 with creative cloud logged in makes Desktop slow, anyone else have this behavior?

    So I had this problem I was trying to figure out for the last week with a new laptop. When I first set up the laptop, everything was lightning fast. After I finished installing all the software, the Windows desktop was slow; in particular, starting applications like Task Manager would show a blank dialog for 30 seconds, Outlook would be laggy, and the CrashPlan desktop application would take 2 minutes to get past the splash screen.
    To figure this out, I created a 2nd user on the system and everything was fast again. So I started copying data from one user to the other. When I started connecting applications, I noticed the desktop slow down again after logging into Creative Cloud. I then logged out and checked. It definitely has to do with being logged into Creative Cloud. Anyone else have this behavior?

    I have tried lots of fixes including renaming files. The only thing that works for me is to remove the following windows updates:
    KB2995388
    KB2975719
    Turn off auto updates before you start, as you need to reboot afterwards.
    I set auto update to the "download but let me choose" option, otherwise these 2 reinstall themselves.
    I'm hanging on for our free upgrade to Win 10 now....

  • Strange Solaris 8 behavior and warnings in messages log

    Hello,
    Solaris 8 4/01 shows strange behavior on an Ultra 10. At a random interval after login, without running any application, the window controls disappear and the machine hangs.
    here are a few lines from messages log:
    Aug 20 11:53:54 paris su: [ID 105066 kern.notice] NOTICE: su1: silo overflow
    Aug 20 11:53:54 paris
    Aug 20 11:53:54 paris su: [ID 643653 kern.notice] NOTICE: su1: ring buffer overflow
    Aug 20 11:53:54 paris
    Aug 20 11:55:55 paris su: [ID 643653 kern.notice] NOTICE: su1: ring buffer overflow
    Aug 20 17:27:32 paris unix: [ID 839527 kern.notice] sched:
    Aug 20 17:27:33 paris unix: [ID 294280 kern.notice] software trap 0x7f
    Aug 20 17:27:34 paris unix: [ID 101969 kern.notice] pid=0, pc=0xf0050e14, sp=0x2a10001b2e1, tstate=0x8800001401, context=0x0
    Aug 20 17:27:36 paris unix: [ID 743441 kern.notice] g1-g7: 10037c7c, 2, 1, 7, 0, 0, 2a10001fd40
    Aug 20 17:27:37 paris unix: [ID 100000 kern.notice]
    Aug 20 17:27:38 paris genunix: [ID 723222 kern.notice] 000000001040c110 unix:sync_handler+150 (1041a970, 10400000, 0, 0, 0, 0)
    Aug 20 17:27:40 paris genunix: [ID 179002 kern.notice] %l0-3: 0000000000001602 0000000000000016 000000000000000e 0000000010009e90
    Aug 20 17:27:40 paris %l4-7: 0000000000000000 0000000000000000 000000000000000c 000000001040c140
    Aug 20 17:27:45 paris genunix: [ID 723222 kern.notice] 000000001040c1e0 unix:prom_rtt+0 (10000000, 16, f0000000, 1041b130, 3000005f548, 0)
    Aug 20 17:27:48 paris genunix: [ID 179002 kern.notice] %l0-3: 0000000000000002 0000000000001400 0000008800001401 0000000010026e4c
    Aug 20 17:27:48 paris %l4-7: 00000000f0050dc0 00000000f00676a8 000000000000000c 000000001040c290
    Aug 20 17:27:52 paris genunix: [ID 723222 kern.notice] 000000001040c330 unix:client_handler+2c (f0067138, 2a10001bc48, 20, 10428288, 1, 1041a970)
    Aug 20 17:27:55 paris genunix: [ID 179002 kern.notice] %l0-3: 000000001041a788 0000000000000000 000000001040e400 0000000000000001
    Aug 20 17:27:55 paris %l4-7: 0000000000000016 000000000000000e 0000000000000016 000000000023f0e0
    Aug 20 17:28:00 paris genunix: [ID 723222 kern.notice] 000002a10001bb90 unix:prom_enter_mon+28 (0, c, b, 300001b7000, 0, 1013c6b4)
    Aug 20 17:28:02 paris genunix: [ID 179002 kern.notice] %l0-3: 0000000010026ad4 0000000000000016 0000000000000009 0000000010009c78
    Aug 20 17:28:02 paris %l4-7: 000000001041b130 0000000000000016 0000000000000001 000002a10007d7f0
    Aug 20 17:28:07 paris genunix: [ID 723222 kern.notice] 000002a10001bc60 unix:debug_enter+d0 (0, 30000fdf910, 0, 10, 0, 3000006a098)
    Aug 20 17:28:09 paris genunix: [ID 179002 kern.notice] %l0-3: 000003000002da00 0000030000f70000 0000000000000000 00000300016943f0
    Aug 20 17:28:09 paris %l4-7: 0000000000000000 00000300016943e0 00000300007a1f28 0000000000000000
    Aug 20 17:28:13 paris genunix: [ID 723222 kern.notice] 000002a10001bd30 su:async_rxint+218 (30000fdf910, 3000107a088, 1000, ff, 0, 2000)
    Aug 20 17:28:16 paris genunix: [ID 179002 kern.notice] %l0-3: 000000001013c6b4 000003000107a089 000003000105a086 0000000000000001
    Aug 20 17:28:16 paris %l4-7: 0000030000fdf910 00000000000000f6 000003000105a000 0000000000006000
    Aug 20 17:28:21 paris genunix: [ID 723222 kern.notice] 000002a10001bde0 su:asyintr+134 (6, 30000fdf910, 6, 10441de0, 10072068, 300001b7000)
    Aug 20 17:28:24 paris genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 00000000001298dd 0000000000000000 000003000006a088
    Aug 20 17:28:24 paris %l4-7: 00000300000578c8 000003000078fea8 0000000000000000 000003000078fed0
    Aug 20 17:28:28 paris genunix: [ID 723222 kern.notice] 000002a10001be90 pcipsy:pci_intr_wrapper+64 (1047e2d8, 7e9, 1, 30000074148, 30000063208, 300001303c8)
    Aug 20 17:28:32 paris genunix: [ID 179002 kern.notice] %l0-3: 000000001026fca4 0000000000000000 0000000000000000 0000030000073c28
    Aug 20 17:28:32 paris %l4-7: 000000001158ff20 0000000000000000 0000000000000000 0000000000000000
    Aug 20 17:28:36 paris genunix: [ID 723222 kern.notice] 000002a10001bf50 unix:current_thread+44 (2a10001fd40, 1041b130, 300011de010, 10423840, 16, 0)
    Aug 20 17:28:39 paris genunix: [ID 179002 kern.notice] %l0-3: 0000000010007450 000002a10001f151 000000000000000c 000002a10001bf50
    Aug 20 17:28:39 paris %l4-7: 0000000000000000 0000000000000000 0000000000000000 000002a10001fa00
    Aug 20 17:28:44 paris genunix: [ID 723222 kern.notice] 000002a10001faa0 unix:idle+54 (1040f850, 0, 0, 1041b130, 3000005f548, 0)
    Aug 20 17:28:46 paris genunix: [ID 179002 kern.notice] %l0-3: 00000000100416c4 0000000000000000 0000000000000000 000002a1000b7d40
    Aug 20 17:28:46 paris %l4-7: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
    Aug 20 17:28:51 paris unix: [ID 100000 kern.notice]
    Aug 20 17:28:51 paris genunix: [ID 672855 kern.notice] syncing file systems...
    Aug 20 17:29:04 paris genunix: [ID 433738 kern.notice] [2]
    Aug 20 17:29:04 paris genunix: [ID 733762 kern.notice] 12
    Aug 20 17:30:47 paris genunix: [ID 433738 kern.notice] [2]
    Aug 20 17:37:27 paris last message repeated 9 times
    Aug 20 17:37:29 paris genunix: [ID 616637 kern.notice] cannot sync -- giving up
    Aug 20 17:37:30 paris genunix: [ID 353387 kern.notice] dumping to /dev/dsk/c0t0d0s1, offset 214827008
    Aug 20 17:37:38 paris genunix: [ID 409368 kern.notice] 100% done: 10582 pages dumped, compression ratio 3.80,
    Aug 20 17:37:38 paris genunix: [ID 851671 kern.notice] dump succeeded
    Any idea what is wrong with my machine?
    Thank you,
    Razvan

    Hi Chandra,
    unfortunately I can't stop the whole database just because of this "little" problem, and I don't even know if it would really help. I'm now installing the same software on a VMware server; I hope that will let me reproduce the problem, and then I can try this patch.
    Thank you, however, for your suggestion.
    Robert

  • Strange behavior of email notification: where is the log?

    Hi,
    Email notifications show strange behavior in our productive system. When the system processes any subscription, only the first four notifications are sent by email, even though all of the inbox notifications are delivered. I checked the mail server, and only four SMTP connections are made. In the development system it works well: all the emails are sent using the same mail server. I marked the option "Enable session debug info" in the EMAIL channel, but I see nothing in the log (default.trc).
    Is there any log where I can check what happened? Where does the session debug info get written?
    Thanks
    Antonio

    Thanks for your fast reply.
    I would like to increase the logging of the Notificator (com.sapportals.wcm.service.notificator.wcm) to DEBUG, but in NW7 SP13 I cannot find that logging location.
    Could you give me any hint on it?
    thanks
    Antonio
    By the way... I found  the following warning:
    JMX connector exception occurred while processing external JMX request [ JMX request (java) v1.0 len: 314 |  src: cluster target-node: 8573000 req: invoke params-number: 4 params-bytes: 0 | :name=com.sap.portal.prt.bridge.service.mbeans.PRTMBeanRuntime,j2eeType=PRTBridge_JMX_SECTION,SAP_J2EEClusterNode=8573000,SAP_J2EECluster=""... [see details]
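
    On the question of raising the Notificator location to DEBUG: I can't speak to the exact NW7 SP13 screens (on our systems it is the Log Configurator service, where locations can be filtered by name), but the underlying idea is the same as in plain java.util.logging, where both the logger and its handler must be lowered to FINE (roughly DEBUG) before debug records appear. A minimal standalone sketch, using the location name from the post purely as an example — this is ordinary JDK logging, not the SAP logging API:

    ```java
    import java.util.logging.ConsoleHandler;
    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class LoggerDebugExample {

        // Lower both the named logger and an attached handler to FINE,
        // so that debug-level records are actually published.
        static Logger configureDebugLogger(String location) {
            Logger log = Logger.getLogger(location);
            log.setLevel(Level.FINE);            // FINE is roughly "DEBUG"

            Handler handler = new ConsoleHandler();
            handler.setLevel(Level.FINE);        // handler must also allow FINE
            log.addHandler(handler);
            log.setUseParentHandlers(false);     // avoid duplicate output via root
            return log;
        }

        public static void main(String[] args) {
            // Location name taken from the post, used here only as an example.
            Logger log = configureDebugLogger("com.sapportals.wcm.service.notificator.wcm");
            log.fine("debug-level message now visible");
            System.out.println("logger level: " + log.getLevel());  // prints: logger level: FINE
        }
    }
    ```

    Note that both levels matter: if the handler stays at its default INFO, FINE records are filtered out even though the logger accepts them — the same kind of mismatch can make a DEBUG location appear silent.
    
    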
