ColdFusion 11 REST warnings in log file?

I'm building a RESTful API with ColdFusion 11.  Today I noticed there are a ton of warnings being logged to the coldfusion-error log file.  For example:
Mar 04, 2015 8:09:52 AM com.sun.jersey.spi.inject.Errors processErrorMessages
WARNING: The following warnings have been detected with resource and/or provider classes:
WARNING: A HTTP GET method, public void api.Country.GetCountry() throws coldfusion.xml.rpc.CFCInvocationException, MUST return a non-void type.
WARNING: A HTTP GET method, public void api.Log.GetLog(java.lang.Double) throws coldfusion.xml.rpc.CFCInvocationException, MUST return a non-void type.
WARNING: A HTTP GET method, public void api.Logout.LogoutUser(java.lang.Double) throws coldfusion.xml.rpc.CFCInvocationException, MUST return a non-void type.
Here is one of my API endpoint functions:
<cfcomponent restpath="country" rest="true" output="false" extends="cfc.data">
     <cffunction name="getCountry" access="remote" output="false" httpmethod="get" returntype="void">
          <cfset LOCAL.countryData = getCountryData()>
          <!--- build the response struct and hand it to the REST runtime --->
          <cfset LOCAL.restResponse = REQUEST.apiObj.response(200, LOCAL.countryData)>
          <cfset restSetResponse(LOCAL.restResponse)>
     </cffunction>
</cfcomponent>
My understanding is that returntype="void" is a MUST in order to use restSetResponse().  This code works great, but the warnings in the log file don't make sense to me.
Am I doing something wrong? 
Or is this some sort of "gap" where logging isn't factoring in REST services/functionality? 
Is it possible to disable warning messages from getting logged? 
Thank you!
Brian White
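
One hedged answer to the third question, offered as a sketch only: the warnings are emitted by com.sun.jersey.spi.inject.Errors through java.util.logging, so raising that logger hierarchy's threshold above WARNING should keep them out of coldfusion-error.log. Whether CF11's JVM picks the setting up from the JRE's logging.properties is an assumption worth verifying; in plain Java the idea looks like this:

// Sketch: hide Jersey's WARNING output by raising the level of the
// com.sun.jersey logger hierarchy to SEVERE.
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceJerseyWarnings {
    // Hold a strong reference: java.util.logging keeps loggers weakly,
    // so an unreferenced logger can be GC'd and lose its level override.
    private static final Logger JERSEY = Logger.getLogger("com.sun.jersey");

    public static void main(String[] args) {
        JERSEY.setLevel(Level.SEVERE);
        // Equivalent logging.properties line (in the JRE used by CF):
        //   com.sun.jersey.level = SEVERE
    }
}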

Is it possible that the application log on your production server is not configured to overwrite events as needed? Are other non-ColdFusion application events being logged?
Carl
aUniqueScreenName wrote:
> I'm using ColdFusion version 7,0,2,142559 on a Windows 2003 server with IIS 6 (my prod box). A little over a month ago CF stopped putting entries into the application.log file. Even when errors occur that should be logged, application.log has nothing. I've gone through every Windows/CF/JRun log I could find and found nothing to indicate any problems at the time of the last entry. Also, no CF or Windows updates were done around that time.
>
> I have a development box set up exactly the same and it does not exhibit the same problem. I can cause errors on the dev box and they get logged in application.log. Yet when I do the same thing on my prod box nothing gets logged. I've compared all the settings between the prod and dev boxes and everything is the same.
>
> The prod server is functioning completely normally except for the failure to put anything in application.log. I'm beyond pulling my hair out on this one. Has anyone encountered the same issue? My Google searches have so far come up empty.

Similar Messages

  • How do I change the location of the coldfusion-out.log and coldfusion-error.log files in CF10

    When I change the log location in ColdFusion Administrator it changes the location of most, but not all, of the log files.  I have a requirement from my customer to place all log files on a separate partition on the server.  For ColdFusion 9 I was able to modify the registry settings to change StandardOut and StandardErr for the ColdFusion JRun service.  This does not appear to be the case for ColdFusion 10, which now uses Tomcat 7.
    I tried modifying log4j.properties file and was able to relocate the hibernatesql.log, axis2.log, and esapiconfig.log but not the coldfusion-out.log.
    I am running ColdFusion 10 Enterprise Edition on a 64-bit Windows 2008 Server.

    The location of the rest of the ColdFusion logs can be changed in the ColdFusion Administrator.  Go to the Debugging and Logging section, Logging Settings.  There is a form at the top of the page where you can change the log storage location.
    -Carl V.
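
    For the log4j.properties route described in the question, the change is just each appender's File path. A sketch only; the appender name below is hypothetical, so substitute the names actually defined in your own cf_root/lib/log4j.properties:
    # Hypothetical appender name -- use the names present in your file
    log4j.appender.HIBERNATELOG.File=E:/cflogs/hibernatesql.log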

  • Warnings in the Upgrade Log file for OBIEE 11G from 10G

    Hi,
    I am new to OBI 11g.
    I have installed the simple type OBI 11g (version 11.1.1.5.0) on a single server (Windows 2003, RAM 4GB, CPU 2.8GHz dual core). I used the Upgrade Assistant to upgrade the RPD & Webcat from version 10.1.3.4.1 to 11g, but I got about 50K warnings in the upgrade log file for the web catalog. They were basically of 2 types, as below:
    1. [2012-02-24T13:17:48.255-05:00] [BIEE] [WARNING] [] [upgrade.BIEE] [tid: 14] [ecid: 0000JMk1k3lDkZKcyToYpk1FHuer000005,0] Invalid columnID reference 'c10' in views!
    Does someone know this warning, and whether it requires a fix or would not impact any reports or dashboards? This warning appears about 25K times in the upgrade log file.
    2. [2012-02-24T12:20:04.162-05:00] [BIEE] [WARNING] [] [upgrade.BIEE] [tid: 14] [ecid: 0000JMk1k3lDkZKcyToYpk1FHuer000005,0] Removed the following node from the element 'view': [[
    <saw:selector xmlns:saw="com.siebel.analytics.web/report/v1.1" columnID="c20" prompt="true"><saw:choice type="column" formula="&quot;- Year&quot;.&quot;MVSR Doses R12M&quot;"><saw:tableHeading><saw:caption><saw:text>Month</saw:text></saw:caption></saw:tableHeading><saw:columnHeading><saw:caption><saw:text>Doses</saw:text></saw:caption></saw:columnHeading></saw:choice><saw:choice type="column" formula="&quot;- Year&quot;.&quot;MVSR Doses YTD&quot;-&quot;- Year&quot;.&quot;MVSR Doses Prior YTD&quot;"><saw:tableHeading><saw:caption><saw:text>Year</saw:text></saw:caption></saw:tableHeading><saw:columnHeading><saw:caption><saw:text>YTD Dose Volume Change</saw:text></saw:caption></saw:columnHeading></saw:choice></saw:selector>
    I have checked this warning on Oracle Support, which says it is fine because some 10g features do not exist in OBI 11g and are not required. I am still not convinced why this type of issue should produce any warning.
    Please let me know if someone has faced these warnings when upgrading the 10g RPD and Webcat to 11g together.
    Just as a note, the services start fine and I can log in to the application and see the dashboards and reports, although I did not check all of them, since this is a POC before the actual upgrade.
    Edited by: user11255710 on Feb 24, 2012 1:44 PM

    Hi
    Can you please point me to the links on Oracle Support which explain that these warnings are OK and can be ignored? We are also facing the same issue.
    Thanks in advance.

  • Deployment Utility - log file errors/warnings not clear

    Hi,
    I'm doing my first deployment and have few questions:
    1) I tried to deploy ONLY the TS user directories. I checked the 'Deploy Files in TestStand User Directories' option and did not check 'Install TestStand Engine', since I deploy it onto a system with TS3.0 + CVI7.0 software (same as the source).
    The process was successful, but I get the following warnings in the log file.
    Starting Log.
    Building...
    5:47 PM
    An installer is being created.
    The installer is finished
    The build process is done.
    5:48 PM
    Warning: You may need to add any sequence files referenced by the following expressions:
    "reportgen_" + RunState.Root.Locals.ReportOptions.Format + ".seq" in step 'Process Step Result', sequence 'SequenceFilePostResultListEntry', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    "reportgen_" + RunState.Root.Locals.ReportOptions.Format + ".seq" in step 'Process Step Result', sequence 'ProcessModelPostResultListEntry', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    "ReportGen_" + Parameters.ReportOptions.Format + ".seq" in step 'Get Report Footer', sequence 'TestReport', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    "ReportGen_" + Parameters.ReportOptions.Format + ".seq" in step 'Get Report Header', sequence 'TestReport', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    "ReportGen_" + Parameters.ReportOptions.Format + ".seq" in step 'Get Report Body (Sequence)', sequence 'TestReport', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    "ReportGen_" + Parameters.ReportOptions.Format + ".seq" in step 'Set Report Disabled Message', sequence 'TestReport', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    RunState.ProcessModelClient in step 'MainSequence Callback', sequence 'Single Pass', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    RunState.ProcessModelClient in step 'MainSequence Callback', sequence 'Test UUTs', sequence file 'C:\Program Files\National Instruments\TestStand 3.0\Components\User\Models\TELRAD_SequentialModel.seq'
    +++++++++++++++++++++++++++++++++++++++
    What does it mean? What to do about it?
    2) The utility does not include the cfg/TypePalettes directory, where I have my own file.
    Is there a reason to it? Do I need to include it manually?
    3) When I included workspace in the utility I get the following errors:
    Starting Log.
    Processing Workspace...
    Done processing workspace file
    +++++++++++++++++++++++++++++++++++++++
    Processing Workspace...
    Done processing workspace file
    Building...
    6:08 PM
    Error could not open LabVIEW
    Distributing VIs requires the LabVIEW Development System
    Class not registered
    in TestStand - Get LV Reference.vi->TestStand - Package VIs.vi->TestStand - Build.vi->TestStand - Distribution Wizard GUI.vi->TestStand - Deployment Utility Splash Screen.vi
    An installer was not created due to an error
    The build process is done.
    6:08 PM
    Error Code:-2147221164
    Class not registered
    in TestStand - Get LV Reference.vi->TestStand - Package VIs.vi->TestStand - Build.vi->TestStand - Distribution Wizard GUI.vi->TestStand - Deployment Utility Splash Screen.vi
    +++++++++++++++++++++++++++++++++++++++
    In order to eliminate the problem source I unchecked the files presented by 'Analyze Source Files' until all of them were unchecked. Even in that case I get the error message.
    What does the message mean?
    What to do about it?
    I don't use any LV in my system!!!!!
    Thanks, and my apologies for the long message
    Rafi

    Hi Rafi,
    On #1: the warnings appear whenever you have an expression that specifies a sequence file, because many expressions cannot be evaluated until runtime. You can ignore them as long as ALL sequence files that the expression may evaluate to are included in the workspace.
    >2) The utility does not include the cfg/TypePalettes where I have a my own file.
    >Is there a reason to it? Do I need to include it manually?
    No, it does not include it, because it is not needed unless you plan to edit the deployed sequence (generally not recommended). If you add a custom type to a sequence, the sequence file will have a copy of the type. You can include the type palettes manually in the workspace if desired.
    >3) When I included workspace in the utility I get the following errors:
    >What does the message mean?
    The deployment utility thinks you have a VI to deploy; it is trying to load LabVIEW and failing because LabVIEW is not installed.
    >What to do about it? I don't use any LV in my system!!!!!
    Somewhere there is a .vi or .llb in the system. Find the VI(s) and uncheck them. I did find a bug where the deployment utility caches the flag indicating whether a VI is present, but it is easily worked around: save a .tsd file, press the New button, and then reload the .tsd.
    -Rick Francis

  • Log file sync question

    Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
    http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
    The question I have relates to the stated breakdown of 'log file sync' wait event:
    1. Wakeup LGWR if idle
    2. LGWR gathers the redo to be written and issue the I/O
    3. Time for the log write I/O to complete
    4. LGWR I/O post processing
    5. LGWR posting the foreground/user session that the write has completed
    6. Foreground/user session wakeup
    Since the note says that the system 'read write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU related work on steps 1, 4, 5 and 6 (or on waiting on the CPU run queue).
    Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
    However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X', amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
    So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say, 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in its work.
    Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms, so this still leaves 10ms (34 - 12 - 12) that can only be accounted for by CPU usage.
    Clearly, my analysis contains a lot of conjecture, hence this note.
    Can anybody point me in the direction of some facts?

    Tony Hasler wrote:
    > Can anybody point me in the direction of some facts?
    It depends on what you mean by facts - presumably only the people who wrote the code know what really happens; the rest of us have to guess.
    You're right about point 1 in the MOS note: it should include "or wait for current lgwr write and posts to complete".
    This means, of course, that your session could see its "log file sync" taking twice the "redo write time" because it posted lgwr just after lgwr has started to write - so you have to wait two write and post cycles. Generally the statistical effects will reduce this extreme case.
    You've been pointed to the two best bits of advice on the internet: As Kevin points out, if you have lgwr posting a lot of processes in one go it may stall as they wake up, so the batch of waiting processes has to wait extra time; and as Riyaj points out - there's always dtrace (et al.) if you want to see what's really happening. (Tanel has some similar notes, I think, on LFS).
    If you're stuck with Oracle diagnostics only then:
    redo size / redo synch writes for sessions will tell you the typical "commit size"
    (redo size + redo wastage) / redo writes for lgwr will tell you the typical redo write size
    If you have a significant number of small "commit sizes" per write (more than CPU count, say) then you may be looking at Kevin's storm.
    Watch out for a small number of sessions with large commit sizes running in parallel with a large number of sessions with small commit sizes - this could make all the "small" processes run at the speed of the "large" processes.
    It's always worth looking at the event histogram for the critical wait events to see if their patterns offer any insights.
    Regards
    Jonathan Lewis
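
    As a concrete illustration of those two ratios (borrowing the instance statistics posted in the 'Log file sync waits' thread further down this page):
    commit size ≈ redo size / redo synch writes = 111,612,156 / 64,433 ≈ 1,732 bytes per commit
    write size  ≈ (redo size + redo wastage) / redo writes = (111,612,156 + 13,535,756) / 48,507 ≈ 2,580 bytes per write
    A write size not much larger than the commit size suggests commits are being written roughly one at a time rather than in large batches.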

  • 500 Internal Server Error again please (including log file)... please help

    1. I unzipped and installed JDeveloper to
    C:\Jdevstudio successfully.
    2. Then I created the untitled1.jsp page ---> it only showed Hello World, and I saved it.
    3. The Embedded OC4J Server compiled successfully... I thought this meant nothing was wrong with the code.
    Successful compilation: 0 errors, 0 warnings.
    4. But when I looked into the Embedded OC4J Server log,
    I found this error:
    [Starting OC4J using the following ports: HTTP=8988, RMI=23891, JMS=9227.]
    C:\Jdevstudio\jdev\system\oracle.j2ee.10.1.3.39.84\embedded-oc4j\config>
    C:\Jdevstudio\jdk\bin\javaw.exe -client -classpath C:\Jdevstudio\j2ee\home\oc4j.jar;C:\Jdevstudio\jdev\lib\jdev-oc4j-embedded.jar -Dhttp.proxyHost=wwwcache.aber.ac.uk -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=*.aber.ac.uk|localhost|127.0.0.1|win2006 -Dhttps.proxyHost=wwwcache.aber.ac.uk -Dhttps.proxyPort=8080 -Dhttps.nonProxyHosts=*.aber.ac.uk|localhost|127.0.0.1|win2006 -Xverify:none -DcheckForUpdates=adminClientOnly -Doracle.application.environment=development -Doracle.j2ee.dont.use.memory.archive=true -Doracle.j2ee.http.socket.timeout=500 -Doc4j.jms.usePersistenceLockFiles=false oracle.oc4j.loader.boot.BootStrap -config C:\Jdevstudio\jdev\system\oracle.j2ee.10.1.3.39.84\embedded-oc4j\config\server.xml
    [waiting for the server to complete its initialization...]
    14 ส.ค. 2551 15:30:33 com.evermind.server.XMLDataSourcesConfig parseRootNode
    INFO: Legacy datasource detected...attempting to convert to new syntax.
    14 ส.ค. 2551 15:30:34 com.evermind.server.jms.JMSMessages log
    INFO: JMSServer[]: OC4J JMS server recovering transactions (commit 0) (rollback 0) (prepared 0).
    14 ส.ค. 2551 15:30:34 com.evermind.server.jms.JMSMessages log
    INFO: JMSServer[]: OC4J JMS server recovering local transactions Queue[jms/Oc4jJmsExceptionQueue].
    Ready message received from Oc4jNotifier.
    Embedded OC4J startup time: 7813 ms.
    Target URL -- http://144.124.120.45:8988/Application1-view-context-root/faces/untitled1.jsp
    51/08/14 15:30:38 Oracle Containers for J2EE 10g (10.1.3.1.0) initialized
    This caused the 500 Internal Server error.
    5. Then I went to Tools --> Preferences --> Deployment --> unchecked Bundle Default data-source.xml During Deployment...
    I got fewer errors showing:
    C:\Jdevstudio\jdk\bin\javaw.exe -jar C:\Jdevstudio\j2ee\home\admin.jar ormi://144.124.120.45:23891 oc4jadmin **** -updateConfig
    14 ส.ค. 2551 15:33:58 com.oracle.corba.ee.impl.orb.ORBServerExtensionProviderImpl preInitApplicationServer
    WARNING: ORB ignoring configuration changes. Restart OC4J to apply new ORB configuration.
    Ready message received from Oc4jNotifier.
    Embedded OC4J startup time: 8109 ms.
    Target URL -- http://144.124.120.45:8988/Application1-view-context-root/faces/untitled1.jsp
    This still caused 500 Internal Server Error
    6. I still didn't know the real problem, so I read the JDeveloper installation guide:
    is it necessary to set the JavaHome in C:\Jdevstudio\jdev\bin\jdev.conf,
    or do I need to set ORACLEHome in this file?
    7. From the log file, the server called the JDK under Jdevstudio\jdk\bin\javaw.exe -jar;
    it seemed fine...
    8. Is anything wrong with admin.jar?
    9. Is the problem from setting/not setting the JDK (JavaHome), or admin.jar... or OC4J problems? Please help.
    I would appreciate your help.
    Thank you

    I deleted the persistence directory like you said, but I found a bigger error in the log...
    [Starting OC4J using the following ports: HTTP=8988, RMI=23891, JMS=9227.]
    C:\Jdevstudio\jdev\system\oracle.j2ee.10.1.3.39.84\embedded-oc4j\config>
    C:\Jdevstudio\jdk\bin\javaw.exe -client -classpath C:\Jdevstudio\j2ee\home\oc4j.jar;C:\Jdevstudio\jdev\lib\jdev-oc4j-embedded.jar -Dhttp.proxyHost=wwwcache.aber.ac.uk -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=*.aber.ac.uk|localhost|127.0.0.1|win2006 -Dhttps.proxyHost=wwwcache.aber.ac.uk -Dhttps.proxyPort=8080 -Dhttps.nonProxyHosts=*.aber.ac.uk|localhost|127.0.0.1|win2006 -Xverify:none -DcheckForUpdates=adminClientOnly -Doracle.application.environment=development -Doracle.j2ee.dont.use.memory.archive=true -Doracle.j2ee.http.socket.timeout=500 -Doc4j.jms.usePersistenceLockFiles=false oracle.oc4j.loader.boot.BootStrap -config C:\Jdevstudio\jdev\system\oracle.j2ee.10.1.3.39.84\embedded-oc4j\config\server.xml
    [waiting for the server to complete its initialization...]
    14 ส.ค. 2551 17:05:46 com.evermind.server.jms.JMSMessages log
    SEVERE: Failed to set the internal configuration of the OC4J JMS Server with: XMLJMSServerConfig[file:/C:/Jdevstudio/jdev/system/oracle.j2ee.10.1.3.39.84/embedded-oc4j/config/jms.xml]
    java.lang.InstantiationException: The system cannot find the path specified
         at com.evermind.server.jms.JMSUtils.make(JMSUtils.java:1072)
         at com.evermind.server.jms.JMSUtils.toInstantiationException(JMSUtils.java:1237)
         at com.evermind.server.jms.JMSServer.recoverState(JMSServer.java:1831)
         at com.evermind.server.jms.JMSServer.internalSetConfig(JMSServer.java:209)
         at com.evermind.server.jms.JMSServer.setConfig(JMSServer.java:182)
         at com.evermind.server.ApplicationServer.initializeJMS(ApplicationServer.java:2412)
         at com.evermind.server.ApplicationServer.setConfig(ApplicationServer.java:955)
         at com.evermind.server.ApplicationServerLauncher.run(ApplicationServerLauncher.java:131)
         at java.lang.Thread.run(Thread.java:595)
    Caused by: java.io.IOException: The system cannot find the path specified
         at java.io.WinNTFileSystem.createFileExclusively(Native Method)
         at java.io.File.createNewFile(File.java:850)
         at com.evermind.server.jms.ServerFile.safeOpenFile(ServerFile.java:775)
         at com.evermind.server.jms.ServerFile.access$000(ServerFile.java:77)
         at com.evermind.server.jms.ServerFile$2.run(ServerFile.java:719)
         at oracle.oc4j.security.OC4JSecurity.doUnprivileged(OC4JSecurity.java:325)
         at com.evermind.server.jms.ServerFile.openFile(ServerFile.java:716)
         at com.evermind.server.jms.ServerFile.<init>(ServerFile.java:133)
         at com.evermind.server.jms.PersistentMap.loadFile(PersistentMap.java:100)
         at com.evermind.server.jms.PersistentMap.<init>(PersistentMap.java:61)
         at com.evermind.server.jms.JMSServer.recoverState(JMSServer.java:1823)
         ... 6 more
    51/08/14 17:05:46 *** (SEVERE) Failed to set the internal configuration of the OC4J JMS Server with: XMLJMSServerConfig[file:/C:/Jdevstudio/jdev/system/oracle.j2ee.10.1.3.39.84/embedded-oc4j/config/jms.xml]
    14 ส.ค. 2551 17:05:46 com.evermind.server.ServerMessages severeJmsServerStartupException
    SEVERE: JMS: Failed to set the internal configuration of the OC4J JMS Server with: XMLJMSServerConfig[file:/C:/Jdevstudio/jdev/system/oracle.j2ee.10.1.3.39.84/embedded-oc4j/config/jms.xml]
    Ready message received from Oc4jNotifier.
    Embedded OC4J startup time: 7719 ms.
    Target URL -- http://144.124.120.45:8988/Application1-view-context-root/faces/untitled1.jsp
    51/08/14 17:05:50 Oracle Containers for J2EE 10g (10.1.3.1.0) initialized
    Are these problems coming from admin.jar?

  • Could not open log file in Win2000 sp3 Message in console

    I have configured the log file for iAS, then restarted my application server. It is giving the error message "Could not open log file 'logs/logVal.10817'" in the KAS window. But the log files are there (created by the server itself) in the logs folder.
    I have configured logs for two systems.
    One system noted all messages in the file;
    the second system did not note any messages.
    But the files exist in the logs folder on both systems.
    I need to configure logs for iAS, KJS and KXS also.
    Please advise me regarding this.
    thanks
    sudheer

    Hi,
    I'm not sure what operation you are trying to perform; can you please confirm that? Please check what kind of messages you tried to log: only errors, all errors & warnings, or all messages? If it was only errors and warnings, then there is a possibility that the server did not encounter any of these, in which case the log file can be empty.
    Regards
    Raj

  • Warning Capture in Log file using SSIS 2008

    While doing an insert from datasource1 (flat file/XML source) to datasource2 (OLE DB), some data is getting truncated. The user wants to capture a log in some log file containing:
    the field which got truncated, along with the row details,
    and the number of truncations which have occurred.
    I see that in the flat file source we can Redirect Row on Error and Truncation, but how do we capture which row got truncated
    and what value got truncated?
    And it needs to be written to a notepad/txt file.

    The user wants to insert the value up to a specific character length and discard the rest. I think Redirect Row transfers the failed row, so we won't have the record inserted into the target DB.
    Could you please suggest how to achieve this requirement?

  • Empty Log files not deleted by Cleaner

    Hi,
    we have a NoSql database installed on 3 nodes with a replication factor of 3 (see exact topology below).
    We run a test which consisted in the following operations repeated in a loop : store a LOB, read it , delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    During the test, the space occupied by the database continues to grow!
    Cleaner threads are running but log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen using the DbSpace utility:
    java -cp /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? Why are empty log files protected by replicas if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting an undocumented parameter, "je.rep.minRetainedVLSNs".
    The solution is described in the NoSQL forum thread: Store cleaning policy
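
    For reference, a sketch of how a JE parameter like this is typically supplied; the value below is purely illustrative, since the parameter is undocumented, and in Oracle NoSQL Database it would normally be pushed to the replication nodes' configuration rather than edited by hand:
    # je.properties for the node's environment -- illustrative value only
    je.rep.minRetainedVLSNs=1000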

  • Warning Messages in the log file

    Hi,
    I see the following warning message in my log files. Can anyone tell me what exactly it means?
    "Skipping grouping rule '(null)' in profile 'Global_Profile_Records_Management_FieldGroup'. The grouped field 'xCategoryID' is a parent "
    Thanks,
    Vidya

    See: Skipping Grouping Rule 'General' In Profile - Warnings [ID 1202354.1]     
    Cause:
    p51044545 Hiding dDocName and dSecurityGroup with IsGroup set throws IdocScript error
    Solution:
    Unchecking the IsGroup flag will avoid the reporting of the mentioned warnings
    -ryan

  • Multiple log files using Log4j

    Hello,
    I want to generate log files based on package structure, e.g. com.temp.test logging to test.log. I also have a log file at the application level, App.log.
    My requirement is that whatever is logged to test.log should not be logged to App.log. This is my log4j.properties file:
    # Log4j configuration file.
    # Available levels are DEBUG, INFO, WARN, ERROR, FATAL
    # Default logger
    log4j.rootLogger=DEBUG, PFILE
    log4j.logger.com.temp.test=DEBUG,TEST
    # PFILE is the primary log file
    log4j.appender.PFILE=org.apache.log4j.RollingFileAppender
    log4j.appender.PFILE.File=./App.log
    log4j.appender.PFILE.MaxFileSize=5120KB
    log4j.appender.PFILE.MaxBackupIndex=10
    #log4j.appender.PFILE.Threshold=DEBUG
    log4j.appender.PFILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.PFILE.layout.ConversionPattern=%p %d[%l][%C] %m%n
    #log4j.appender.PFILE.layout.ConversionPattern=%p %d %m%n
    log4j.appender.TEST=org.apache.log4j.RollingFileAppender
    log4j.appender.TEST.File=./test.log
    log4j.appender.TEST.MaxFileSize=5120KB
    log4j.appender.TEST.MaxBackupIndex=10
    log4j.appender.TEST.layout=org.apache.log4j.PatternLayout
    log4j.appender.TEST.layout.ConversionPattern=%p %d[%l][%C] %m%n
    Can you help me?

    You have to configure the temp logger so that it does not send its info on to the root logger.
    For this, you can use the additivity flag.
    # Default logger
    log4j.rootLogger=DEBUG, PFILE
    log4j.additivity.com.temp.test=false
    log4j.logger.com.temp.test=DEBUG,TEST
    The rest of the file remains the same.
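
    A minimal usage sketch of the additivity setup above (log4j 1.x; the class names are hypothetical). Loggers under com.temp.test write only to test.log once additivity is false, while everything else still flows through the root logger to App.log:
    import org.apache.log4j.Logger;

    public class LoggingDemo {
        // Named after an application class -> root logger -> App.log
        private static final Logger APP_LOG = Logger.getLogger(LoggingDemo.class);
        // Under com.temp.test -> TEST appender only; additivity=false
        // stops the event from also reaching the root logger's App.log
        private static final Logger TEST_LOG = Logger.getLogger("com.temp.test.Sample");

        public static void main(String[] args) {
            APP_LOG.debug("goes to App.log");
            TEST_LOG.debug("goes to test.log only");
        }
    }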

  • Log files in DB folder

    Hello!
    Does anyone know if it is necessary to keep all of the .log files in the \zenworks\inv\db folder? There are numerous .log files, created daily, from the very first day that we installed the database on the server (over 2 years ago). Each file is 128k in size. It would be nice if we could delete some or most of them to free up the disk space being used.
    Thanks!
    Larry

    Larry,
    It appears that in the past few days you have not received a response to your posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Do a search of our knowledgebase at http://support.novell.com/search/kb_index.jsp
    - Check all of the other support tools and options available at http://support.novell.com in both the "free product support" and "paid product support" drop down boxes.
    - You could also try posting your message again. Make sure it is posted in the correct newsgroup. (http://support.novell.com/forums)
    If this is a reply to a duplicate posting, please ignore and accept our apologies and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • Log file sync waits

    10.2.0.2 aix 5.3 64bit archivelog mode.
    I'm going to attempt to describe the system first and then outline the issue: The database is about 1GB in size, of which only about 400MB is application data. There is only one table in the schema that is very active, with all transactions inserting and/or updating a row to log the user activity. The rest of the tables are used primarily for reads by the users and are periodically updated by the application administrator with application code. There's about 1.2GB of archive logs generated per day, from three 50MB redo logs, all on the same filesystem.
    The problem: We randomly have issues with users being kicked out of the application or hung up for a period of time. This application is used at a remote site, and many times we can attribute the users' issues to network delays or problems with a terminal server they are logging into. Today, however, they called and I noticed an abnormally high number of 'log file sync' waits.
    I asked the application admin if there could have been more activity during that time frame and more frequent commits than normal, but he says there was not. My next thought was that there might be an issue with the I/O subsystem that the logs are on. So I went to our AIX admin to find out the activity of that filesystem during that time frame. She had an nmon report generated that shows the RAID-1 disk group peak activity during that time was only 10%.
    Now I took two AWR reports and compared some of the metrics to see if indeed there was the same amount of activity, and it does look like the load was the same. With the same amount of activity & commits during both time periods, wouldn't that lead to the time being spent waiting on writes to the disk that the redo logs are on? If so, why wouldn't the nmon report show a higher percentage of disk activity?
    I can provide more values from the awr reports if needed.
    First AWR report:
              per sec          per trx
    Redo size:     31,226.81     2,334.25
    Logical reads:     646.11          48.30
    Block changes:     190.80          14.26
    Physical reads:     0.65          0.05
    Physical writes:     3.19          0.24
    User calls:     69.61          5.20
    Parses:          34.34          2.57
    Hard parses:     19.45          1.45
    Sorts:          14.36          1.07
    Logons:          0.01          0.00
    Executes:     36.49          2.73
    Transactions:     13.38
    Second AWR report:
    Redo size:     33,639.71      2,347.93
    Logical reads:     697.58          48.69
    Block changes:     215.83          15.06
    Physical reads:     0.86          0.06
    Physical writes:     3.26          0.23
    User calls:     71.06          4.96
    Parses:          36.78          2.57
    Hard parses:     21.03          1.47
    Sorts:          15.85          1.11
    Logons:          0.01          0.00
    Executes:     39.53          2.76
    Transactions:     14.33
                        Total          Per sec          Per Trx
    First AWR report:
    redo blocks written           252,046      70.52           5.27
    redo buffer allocation retries      7           0.00           0.00
    redo entries                167,349      46.82           3.50
    redo log space requests      7           0.00           0.00
    redo log space wait time      49           0.01           0.00
    redo ordering marks           2,765           0.77           0.06
    redo size                111,612,156      31,226.81      2,334.25
    redo subscn max counts      5,443           1.52           0.11
    redo synch time           47,910           13.40           1.00
    redo synch writes           64,433           18.03           1.35
    redo wastage                13,535,756      3,787.03      283.09
    redo write time                27,642           7.73           0.58
    redo writer latching time      2           0.00           0.00
    redo writes                48,507           13.57           1.01
    user commits                47,815           13.38           1.00
    user rollbacks                0           0.00           0.00
    Second AWR report:
    redo blocks written           273,363      76.17           5.32
    redo buffer allocation retries      6           0.00           0.00
    redo entries                179,992      50.15           3.50
    redo log space requests      6           0.00           0.00
    redo log space wait time      18           0.01           0.00
    redo ordering marks           2,997           0.84           0.06
    redo size                120,725,932      33,639.71      2,347.93
    redo subscn max counts      5,816           1.62           0.11
    redo synch time           12,977           3.62           0.25
    redo synch writes           66,985           18.67           1.30
    redo wastage                14,665,132      4,086.37      285.21
    redo write time                11,358           3.16           0.22
    redo writer latching time      6           0.00           0.00
    redo writes                52,521           14.63           1.02
    user commits                51,418           14.33           1.00
    user rollbacks                0           0.00           0.00

    Edited by: PktAces on Oct 1, 2008 1:45 PM
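
    A quick derivation from the two reports above (assuming the usual centisecond units for the AWR time statistics):
    First report:  redo write time / redo writes = 27,642 / 48,507 ≈ 5.7 ms per write; redo synch time / redo synch writes = 47,910 / 64,433 ≈ 7.4 ms per sync.
    Second report: 11,358 / 52,521 ≈ 2.2 ms per write; 12,977 / 66,985 ≈ 1.9 ms per sync.
    So the first period really was spending far longer per redo write and per sync than the second, despite the similar load profile.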

    Mr Lewis,
    Here are the results from the histogram query; the two sets of values were gathered about 15 minutes apart, during a slower-than-normal activity period.
    105     log file parallel write     1     714394
    105     log file parallel write     2     289538
    105     log file parallel write     4     279550
    105     log file parallel write     8     58805
    105     log file parallel write     16     28132
    105     log file parallel write     32     10851
    105     log file parallel write     64     3833
    105     log file parallel write     128     1126
    105     log file parallel write     256     316
    105     log file parallel write     512     192
    105     log file parallel write     1024     78
    105     log file parallel write     2048     49
    105     log file parallel write     4096     31
    105     log file parallel write     8192     35
    105     log file parallel write     16384     41
    105     log file parallel write     32768     9
    105     log file parallel write     65536     1
    105     log file parallel write     1     722787
    105     log file parallel write     2     295607
    105     log file parallel write     4     284524
    105     log file parallel write     8     59671
    105     log file parallel write     16     28412
    105     log file parallel write     32     10976
    105     log file parallel write     64     3850
    105     log file parallel write     128     1131
    105     log file parallel write     256     316
    105     log file parallel write     512     192
    105     log file parallel write     1024     78
    105     log file parallel write     2048     49
    105     log file parallel write     4096     31
    105     log file parallel write     8192     35
    105     log file parallel write     16384     41
    105     log file parallel write     32768     9
    105     log file parallel write     65536     1
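
    Tallying the first set as wait_time_milli buckets and counts: roughly 1,386,981 writes in total, of which 714,394 + 289,538 + 279,550 ≈ 1,283,482 (about 92%) completed within 4 ms. The tail, though, reaches the 8192, 16384, 32768 and even 65536 ms buckets; a handful of multi-second log writes is exactly the kind of pattern that would show up as occasional user 'hangs'.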

  • Mapping Errors Log file to be sent via FTP

    Hi All,
    Functional specs of a file-to-file scenario require creating an additional log file containing the file name, the creation date, and a list of the lines where a problem occurred, with an error description, and then sending it to R/3 via FTP.
    Does anyone know whether this is possible? And if it is, how could I do it?
    Thanks in advance.
    Cheers.

    Daniel,
    This is possible.
    1. To get the source file name and append the date to it, you can use Adapter-Specific Identifiers (File Name) in the sender and receiver file adapters, and set the file name in the message mapping using the code below.
    2. The rest of the error handling and the creation of error records for the error file can be handled via the mapping itself.
    String newfilename = "";
    DynamicConfiguration conf = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
    DynamicConfigurationKey key = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "FileName");
    // Get the source file name
    String oldfilename = conf.get(key);
    // Format today's date
    java.text.SimpleDateFormat dateformat = new java.text.SimpleDateFormat("yyyyMMdd");
    String datestamp = dateformat.format(new java.util.Date());
    // Append source + date
    newfilename = oldfilename + datestamp;
    conf.put(key, newfilename);
    Regards,
    Bhavesh

  • Log File Error - Is this my Responsibility?

    Hi server admin people,
    I'm a customer who is trying to kill an application hosted by my ISP. I've removed and deleted all related directories and files on my ISP's server and told them I don't need ColdFusion support anymore. However, they want to continue charging me because their server is getting bogged down by calls to a cfm page that no longer exists. The error log file is below. I don't know how to fix the problem. Should I have to? I guess my expectation would be that this is some attack my ISP should know how to handle. Greatly appreciate anyone taking the time to look and advise. Here are the repeating lines in the server's log file my ISP provided me with:
    "Error","jrpp-611","12/24/08","19:32:35",,"File not found: /Intranet/processaskquestion.cfm The specific sequence of files included or processed is: F:\InetPub\Webhost\Testing\Intranet\processaskquestion.cfm "
    coldfusion.runtime.TemplateNotFoundException: File not found: /Intranet/processaskquestion.cfm
         at coldfusion.filter.PathFilter.invoke(PathFilter.java:77)
         at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:47)
         at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:52)
         at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
         at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:35)
         at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:43)
         at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
         at coldfusion.CfmServlet.service(CfmServlet.java:105)
         at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:91)
         at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
         at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:252)
         at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:527)
         at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:192)
         at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:348)
         at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:451)
         at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:294)
         at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
    Sincerely, Paul.

    If you make *any* request for *any* ColdFusion file {.cfm, .cfc, .cfr, etc.} on *any* web site on a web server configured with ColdFusion, you will get a ColdFusion error by default.
    The default behavior is for the web server to hand any such request to ColdFusion whether the file exists or not. There are settings that can change this, but they can turn off some of ColdFusion's functionality.
    So, yes, I think this is the ISP's issue.
