Suggestion for default arch logging behavior

Okay let me throw this out there and see what comes back...
Every Linux distribution I have used (apart from arch) has a boot log file enabled by default, or provides a simple Yes/No flag to turn the feature on.
Arch does not.  Moreover, I have searched the forums and posted a question or two myself on the topic.  No one seems to know the answer, and many responses suggest that, amongst the arch community, the possibility of a boot log file is treated as either: a) something that no one in their right mind would want, or b) something that might be useful, but that no one can figure out how to implement.
What is this?  What is going on?  Arch has a bootlogd binary in /sbin/.  Presumably there is no reason why it won't actually work, but no one seems to know where to put the call to the binary, or why they would put it there.
This seems like an easy-to-add feature that a developer or moderator (anyone who really knows the system layout well) could make the default for future installs.  It just makes a text file that is a) not large, and b) can be very useful at times.
So how about it?  Why not make this part of the default arch logging setup?  (And of course explain what you did that worked.)
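For anyone who wants to experiment in the meantime, here is a minimal sketch of the sort of thing I mean, assuming arch's old-style initscripts; the rc.local placement and the logfile path are my guesses, not a tested recipe (rc.local runs far too late to catch early boot messages, so this is a proof of concept only):

#!/bin/sh
# /etc/rc.local -- hypothetical sketch, not a tested recipe.
# bootlogd waits for its logfile to exist unless -c is given (and -c may
# be missing from the arch build, as noted later in this thread).
if [ -x /sbin/bootlogd ]; then
    touch /var/log/boot.log
    /sbin/bootlogd -l /var/log/boot.log
fi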

tomk:
Sorry to disclose my ignorance, but I'm not sure how to do that.
What I have been doing, though, is trying to see in more detail how it is done on a Debian system which I have access to.  Here is what I've learned so far:
There are not that many references to bootlogd on the system, so it might be possible (for me) to track down what is going on:
root@wave32p:/etc# locate bootlogd
/etc/default/bootlogd
/etc/init.d/bootlogd
/etc/init.d/stop-bootlogd
/etc/init.d/stop-bootlogd-single
/sbin/bootlogd
/usr/share/man/man8/bootlogd.8.gz
1) "/etc/default/bootlogd" must be edited (trivially) such that a "No" becomes a "Yes"  -- this seems like just a master switch.
2) The man pages are exactly the same on the two systems, and the output of "/sbin/bootlogd -v" is the same on both; however, the size of the bootlogd binary itself differs (larger on the debian64 system).  Not sure what to make of that, but it is not what I was hoping to see.
3) The script "/etc/init.d/bootlogd" runs with (e.g.) a "start/stop" flag, the same as most "functions" under arch that have scripts associated with them.
4) It would seem that I have to grind my way through the above script if I'm going to make any progress.  I'm doing that in my spare time at the moment, though it's a challenge since it's been a few years since I've written bash scripts on a regular basis.  FYI, here is the /etc/init.d/bootlogd script verbatim.  (Additional note: the -r option below is supported by the arch version of the bootlogd binary, but the -c option does not seem to be... interesting?  Here is the man page entry for -c: "Attempt to write to the logfile even if it does not yet exist.  Without this option, bootlogd will wait for the logfile to appear before attempting to write to it.  This behavior prevents bootlogd from creating logfiles under mount points."):
#! /bin/sh
### BEGIN INIT INFO
# Provides: bootlogd
# Required-Start: mountdevsubfs
# X-Start-Before: hostname keymap keyboard-setup procps pcmcia hwclock hwclockfirst hdparm hibernate-clean
# Required-Stop:
# Default-Start: S
# Default-Stop:
# Short-Description: Start or stop bootlogd.
# Description: Starts or stops the bootlogd log program
# which logs boot messages.
### END INIT INFO
PATH=/sbin:/bin # No remote fs at start
DAEMON=/sbin/bootlogd
[ -x "$DAEMON" ] || exit 0
NAME=bootlogd
DESC="boot logger"
BOOTLOGD_OPTS="-r -c"
[ -r /etc/default/bootlogd ] && . /etc/default/bootlogd
. /lib/init/vars.sh
. /lib/lsb/init-functions
# Because bootlogd is broken on some systems, we take the special measure
# of requiring it to be enabled by setting an environment variable.
case "$BOOTLOGD_ENABLE" in
[Nn]*)
exit 0
esac
# Previously this script was symlinked as "stop-bootlogd" which, when run
# with the "start" argument, should stop bootlogd. Now stop-bootlogd is
# a distinct script, but for backward compatibility this script continues
# to implement the old behavior.
SCRIPTNAME=${0##*/}
SCRIPTNAME=${SCRIPTNAME#[SK]??}
ACTION="$1"
case "$0" in
*stop-bootlog*)
[ "$ACTION" = start ] && ACTION=stop
esac
case "$ACTION" in
start)
# PATH is set above
log_daemon_msg "Starting $DESC" "$NAME"
if [ -d /proc/1/. ]
then
umask 027
start-stop-daemon --start --quiet --exec $DAEMON -- \
$BOOTLOGD_OPTS
ES=$?
else
$DAEMON $BOOTLOGD_OPTS
ES=$?
fi
log_end_msg $ES
stop)
PATH=/bin:/sbin:/usr/bin:/usr/sbin
log_daemon_msg "Stopping $DESC" "$NAME"
start-stop-daemon --oknodo --stop --quiet --exec $DAEMON
ES=$?
sleep 1
log_end_msg $ES
if [ -f /var/log/boot ] && [ -f /var/log/boot~ ]
then
[ "$VERBOSE" = no ] || log_action_begin_msg "Moving boot log file"
# bootlogd writes to boot, making backup at boot~
cd /var/log && {
chgrp adm boot || :
savelog -q -p -c 5 boot \
&& mv boot.0 boot \
&& mv boot~ boot.0
ES=$?
[ "$VERBOSE" = no ] || log_action_end_msg $ES
fi
restart|force-reload)
/etc/init.d/bootlogd stop
/etc/init.d/bootlogd start
status)
status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload|status}" >&2
exit 3
esac
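For reference, the Debian master switch mentioned in point 1 is tiny; as far as I can tell, /etc/default/bootlogd amounts to just:

# /etc/default/bootlogd
# Run bootlogd at startup?
BOOTLOGD_ENABLE=Yes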

Similar Messages

  • User Defined Metric for default ALERT log directory

    On our system we have moved the alert log to a non-default location. If we use the wrong initialization file or something else goes haywire then trace files and alert logs get placed in the $ORACLE_HOME/rdbms/log directory.
I want to create a user-defined metric for each machine that will look in all the $ORACLE_HOME/rdbms/log directories for each ORACLE_HOME on the server, checking for any alert*.log or *.trc files, and create an alert if it encounters any of these files.
I don't know if I should do it as a host UDM or an instance UDM. And, more to the point, how do I get it to see the multiple homes if the server has more than one?
    Any ideas would be appreciated.
    Thanks
    Tim

    Well I did it with a host UDM calling a local script (which will be installed on shared drives for the development/test and production systems).
The local script cats the /var/opt/oracle/oratab file, eliminating lines beginning with # or $, then takes the second field of each line and loops through all entries using a checkit procedure. The checkit procedure determines whether the appropriate log directories exist for the Oracle home and does a find on the directory looking for alert*.log and *.trc. If there are any, global variables get updated with the count and directory name.
There is an if statement before exit that checks whether the count of files is greater than zero and writes an appropriate em_result and em_message depending on the results.
    I then created the UDM in EM to call this script and check for critical/warning thresholds.
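A rough shell sketch of what that script does, as I understand it (the paths, the checkit logic, and the em_result/em_message output lines are reconstructed from the description above, not the actual script):

#!/bin/sh
# Hypothetical reconstruction of the UDM helper described above.
COUNT=0
DIRS=""

checkit() {
    logdir="$1/rdbms/log"
    [ -d "$logdir" ] || return 0
    n=`find "$logdir" -name 'alert*.log' -o -name '*.trc' | wc -l`
    if [ "$n" -gt 0 ]; then
        COUNT=`expr $COUNT + $n`
        DIRS="$DIRS $logdir"
    fi
}

# The second field of each oratab entry is the ORACLE_HOME.
for home in `grep -v '^[#$]' /var/opt/oracle/oratab | cut -d: -f2 | sort -u`
do
    checkit "$home"
done

if [ "$COUNT" -gt 0 ]; then
    echo "em_result=$COUNT"
    echo "em_message=Found $COUNT stray alert/trace files in:$DIRS"
else
    echo "em_result=0"
fi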
    Regards
    Tim

  • Design Suggestions for Default Web Template

I am starting to develop some web applications for 7.x and do not want to use the default BEx Web Template, simply because it offers so much functionality that is either too complicated for users, not needed by users, or something we don't want them using.
    I am wondering how to approach this effort in developing a good default web template.  Does it make sense to use a default template since each query can be so different?
    Does anyone have any suggestions about what to include, exclude and why?  Any details about the template you designed would be greatly appreciated!
    Thanks

    Hi,
    Please refer the following URL:
    http://help.sap.com/saphelp_nw04/helpdata/en/44/b26a3b74a4fc31e10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/9f/281a3c9c004866e10000000a11402f/frameset.htm
    Thanks,
    Venkat

  • Different instructions for disable arch log mode on 11Gr2 RAC server?

    Hello all,
    I've run into a problem where I've lost my tape drive...and have no sysadmins to help.
I don't want my RAC instances to run out of space and halt, so I'm planning to take them out of archive log mode and just do exports daily till I can move them or get tape going again.
    This is easy enough with a non-clustered instance, but I'm reading around and finding conflicting information for doing it on a RAC system.
    In the Oracle® Real Application Clusters Administration and Deployment Guide
    11g Release 2 (11.2)...it states in simple terms:
    (http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/rman.htm#i474611)
    "In order for redo log files to be archived, the Oracle RAC database must be in ARCHIVELOG mode. You can run the ALTER DATABASE SQL statement to change the archiving mode in Oracle RAC, because the database is mounted by the local instance but not open in any instances. You do not need to modify parameter settings to run this statement."
and that's about it.
I've been researching and found a couple of other, non-official guides to this, which describe a much more involved process that seems to follow this path:
    1. sqlplus into one instance and change the cluster_database=false scope=spfile sid='specific_node_name';
    2. Shut down all instances, srvctl stop database -d <instance_name>
    3. Startup the instance you changed cluster_database on with sqlplus and startup mount;
4. On this instance, run ALTER DATABASE NOARCHIVELOG;
    5. On same instance change the cluster parameter back: alter system set cluster_database=true scope=spfile sid='specific_node_name';
    6. Shut down this single instance
    7. Start all instances with srvctl start database -d <instance>
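Scripted out, that longer procedure looks roughly like this (MYDB and RAC1 are placeholders for the database name and the node-specific SID; run it on the node whose SID you set):

#!/bin/sh
# Hypothetical sketch of steps 1-7 above; names are placeholders.
sqlplus -s "/ as sysdba" <<'EOF'
alter system set cluster_database=false scope=spfile sid='RAC1';
exit
EOF
srvctl stop database -d MYDB                # step 2: all instances down
sqlplus -s "/ as sysdba" <<'EOF'
startup mount
alter database noarchivelog;
alter system set cluster_database=true scope=spfile sid='RAC1';
shutdown immediate
exit
EOF
srvctl start database -d MYDB               # step 7: bring everything back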
    I've found references to this at:
    http://oracle-dba-yi.blogspot.com/2010/12/enabledisable-archive-log-mode-in-11gr2.html
    and
    http://www.dba-oracle.com/bk_disable_archive_log_mode.htm
    Among other sites. I'm curious why the Oracle documentation on this doesn't mention all these steps?
    I'm guessing the longer version is the path I should take, but I wanted to ask here first if this is correct?
    I'm on Oracle 11Gr2....hasn't been patched with latest patchset, running on RHEL5, and is a 5 node cluster.
    Thank you in advance,
    cayenne
    Edited by: cayenne on Oct 21, 2011 11:51 AM

    Fiedi Z wrote:
There are a couple of things you need to consider:
- a daily export is not a backup strategy
- you're putting your enterprise at risk by disabling archivelog
Your company has a 5-node RAC, so I assume this is a mid-to-large company. A question you might ask yourself: is your company really so desperate that there is no disk space available to back up to a temporary location or server?
However, if you still insist on your strategy, then follow the links you have; that is how to disable archivelog in RAC.
Cheers

Thank you everyone for the comments.
This is a DEV environment...and they are planning to move this all to a new facility where we won't have the power outages and old defunct equipment.
Right now I do not have drive space to put all of this. I've informed them of the risks of not having point-in-time recovery. I really don't see any other choice on this; I don't want to run noarchivelog either, but I've been without tape to move the logs off for days now, and even with low traffic I'm afraid they will fill and I'll have databases halting.
I think at this point, and again, this is not production data, I'm going to have to go with daily exports, and that will have to do me till I can get these servers 'moved' to a new facility soon.
    Again, thank you for the comments!!!
    cayenne

  • ASM - arch logs

    I am using asm for the arch log storage in a two node cluster. I have set parameter log_archive_format to '%t_%s_%r.arc'. But the logs are not getting generated according to the format. Why is that?
    Thanks.

Do you use ASM for the archive destination? Regarding log_archive_format, the documentation says: if you set LOG_ARCHIVE_FORMAT to an incomplete ASM filename (such as +dgroupA), Oracle will ignore it.
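To illustrate (a hypothetical sketch; +DGROUPA and the filesystem path are placeholders): the format string only applies to a regular filesystem destination, while an ASM disk group destination gets Oracle-managed (OMF) names:

#!/bin/sh
# Hypothetical illustration of where log_archive_format applies.
sqlplus -s "/ as sysdba" <<'EOF'
-- ASM disk group destination: OMF naming, the format string is ignored
alter system set log_archive_dest_1='LOCATION=+DGROUPA';
-- filesystem destination: %t_%s_%r.arc is honored here
alter system set log_archive_dest_2='LOCATION=/u01/arch';
show parameter log_archive_format
exit
EOF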

  • Looking for multiple graph logging suggestions

    I have a "host program" that is attached to a cRIO platform used for precision controlling of some heavy machinery.  The current host is used to set parameters, but primarily to read values and display trends for the operators and maintenance personnel.  I had previously been looking for a method to put a correct timestamp on my historical charts, which allow the user to scroll back if something odd occurs, but this has become more trouble than it is worth.  One of my main difficulties in all of this is that I have 6 synchronized graphs that need the "roll back" ability, and putting a live timestamp on a chart you must be able to pause and rewind is unfathomably difficult with LabVIEW...
    So, I have decided to kill 2 birds with 1 stone, so to speak.  I needed to work out logging the graphed signals to file anyhow, so I decided I might as well just do this and externally reference the log files for the "rewind" functionality.  My problem now is finding the best approach to log these 6 graphs of differing types...  The first graph has 3 overlaid analog values, the second graph has 2 overlaid values, and the last 4 are each just a single individual (scaled) sensor reading.  The graphs are updated every 100ms.  Accurate timestamps (date and time) need to be attached to the data as well.
    This logging is a new arena of LabVIEW for me, but I figured there has to be at least a few experienced users or developers out there who know all of the ins and outs.  Any suggestions on the best way for me to log this data?  Individual files for each chart, or is there a better method to combine the data to keep it cleaner (not multiple files to track)?  Are there pitfalls I should be watching out for?

Take a look at TDMS files.  They are a binary format that is open (www.ni.com/white-paper/5696/en).  You can define groups of channels, and they can be logged at different rates.  The LabVIEW API makes it relatively easy to retrieve a channel or group, although it may be more challenging to find a specific time if your files are large.  Besides the examples included with LabVIEW, you can drill down into the TDMS File Viewer that's in the TDMS palette for another example.  Also, there is a plug-in for Excel that will allow it to import TDMS files (zone.ni.com/devzone/cda/epd/p/id/2944).

  • Where is the default location for the OC4J logs?

    Where is the default location for the OC4J logs?

    Depends on what variant of the product you are using.
In an Oracle Application Server environment (i.e. the one you install versus unzip) the stdout/stderr streams from OC4J are captured in the $ORACLE_HOME/opmn/logs/<group-instance> directory.
    If you are using standalone then stdout goes to the console where you started the process.
The logs for the OC4J instance itself are located in the $ORACLE_HOME/j2ee/home/log/<instance_group> directory. In this directory you'll see different logs for different areas of the server. Most of the logs, however, are captured in the oc4j sub-directory in the log.xml file. Whenever you enable an OC4J component logger in the j2ee-logging.xml file or via ASControl, this is where the logs will end up.
    Take a look at the $ORACLE_HOME/j2ee/home/config/j2ee-logging.xml file -- the log handlers will give you some idea on the different log files that are in use.
    You can configure your application loggers to also direct their log into this file if you wish. See http://buttso.blogspot.com/2007/09/directing-log-messages-into-oc4j.html for an example.

  • TS1702 I brought iPad for my nephew logged in with my id and download iWork's which was free. Now as it was against my id I deleted the app and changed default I'd as his, now I try to install iWork's it is asking for payment though iWork's is free on thi

I bought an iPad for my nephew, logged in with my ID, and downloaded iWork, which was free. Since it was registered against my ID, I deleted the apps and changed the default ID to his; now when I try to install iWork it asks for payment, even though iWork is free on this iPad.

    That's because the free apps are already registered to your Apple ID. You can't use them with another Apple ID.

  • Need Suggestion for Archival of a Table Data

    Hi guys,
I want to archive one of my large tables; the structure of the table is below.
Around 40,000 rows are inserted into the table daily.
Need suggestions on this: will partitioning help, and on what basis?
CREATE TABLE IM_JMS_MESSAGES_CLOB_IN
(
  LOAN_NUMBER     VARCHAR2(10 BYTE),
  LOAN_XML        CLOB,
  LOAN_UPDATE_DT  TIMESTAMP(6),
  JMS_TIMESTAMP   TIMESTAMP(6),
  INSERT_DT       TIMESTAMP(6)
)
TABLESPACE DATA
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL 1M
  NEXT 1M
  MINEXTENTS 1
  MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
)
LOGGING
LOB (LOAN_XML) STORE AS
(
  TABLESPACE DATA
  ENABLE STORAGE IN ROW
  CHUNK 8192
  PCTVERSION 10
  NOCACHE
  STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
  )
)
NOCACHE
NOPARALLEL;
    do the needful.
    regards,
    Sandeep

There will not be any updates/deletes on the table.
I have created a partitioned table with the same structure, and I am inserting the records from my original table into this partitioned table, where I will maintain data for 6 months.
After loading the data from the original table into the archive table I will truncate the original table.
If my original table is partitioned, then what about restoring the data? How will I restore last month's data?
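A sketch of the monthly flow being described (all names are hypothetical; it assumes the archive table is range-partitioned by month on INSERT_DT, so a restore is just a partition-scoped copy back):

#!/bin/sh
# Hypothetical sketch of the archive/truncate/restore cycle above.
sqlplus -s "/ as sysdba" <<'EOF'
-- archive: move current rows into the partitioned archive table
insert /*+ append */ into im_jms_messages_arch
  select * from im_jms_messages_clob_in;
commit;
truncate table im_jms_messages_clob_in;
-- restore last month: copy one partition's rows back
insert into im_jms_messages_clob_in
  select * from im_jms_messages_arch partition (p_2011_09);
commit;
exit
EOF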

  • ORA-00313: open failed for members of log group 3 of thread 1

    Whenever I try to login as a user I get the following:
    sqlplus user/user
    SQL*Plus: Release 10.2.0.2.0 - Production on Fri Nov 9 10:43:39 2007
    Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
So it occurs to me to log in as sysdba and restart the DB …
    SQL> connect sys/manager as sysdba
    Connected.
    SQL> shutdown immediate;
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 536870912 bytes
    Fixed Size 1281264 bytes
    Variable Size 150995728 bytes
    Database Buffers 377487360 bytes
    Redo Buffers 7106560 bytes
    Database mounted.
    ORA-00313: open failed for members of log group 3 of thread 1
    ORA-00312: online log 3 thread 1: '/u06/oradata/RKDB/redo03.log'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 2: No such file or directory
    Additional information: 3
    SQL> quit
    I now realized what happened and how this happened. During a clean-up effort this file was accidentally deleted and unfortunately we don’t have any backups.
    I am willing to lose the data. Is there something I can do so that the startup does not try to open this file ?
    All I am able to do now is to mount the database but not open it.
    Thanks in advance,
    Daniel

    this is what I get now ...
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u02/oradata/RKDB/system01.dbf'
    SQL> Recover database until cancel;
    ORA-00279: change 608619 generated at 11/09/2007 10:00:41 needed for thread 1
    ORA-00289: suggestion : /u05/oradata/RKDB/arch/log1_33_633207859.arc
    ORA-00280: change 608619 for thread 1 is in sequence #33
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log
    '/u05/oradata/RKDB/arch/log1_33_633207859.arc'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u02/oradata/RKDB/system01.dbf'
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u02/oradata/RKDB/system01.dbf'
    SQL>
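For the record: once the until-cancel recovery has been attempted, file 1 is left needing more recovery, as shown above. Had the damaged group still been INACTIVE with the database merely mounted, the usual way out, as I understand it, would have been to clear the unarchived group rather than open resetlogs; something like:

#!/bin/sh
# Hypothetical sketch: with the database mounted and the lost group
# INACTIVE, clearing it recreates the missing member in place.
sqlplus -s "/ as sysdba" <<'EOF'
startup mount
alter database clear unarchived logfile group 3;
alter database open;
exit
EOF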

  • Default redo log 100mb???????

    hi all
my database is running in archive mode
    (database oracle 9i rel 2)
    when i issue
    SQL> SELECT * FROM V$LOG;
GROUP# THREAD# SEQUENCE#     BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
     1       1        59 104857600       1 NO  CURRENT        3519698 20-FEB-04
     2       1        57 104857600       1 YES INACTIVE       3477638 20-FEB-04
     3       1        58 104857600       1 YES INACTIVE       3479786 20-FEB-04
It's funny: the default size of the redo logs is 100 MB, and every archive log file created on my database is 100 MB.
1. What is the reason for the default size of 100 MB?
2. This is my production database and I want to change the size to 1 MB.
Please give some suggestions for changing it, because my boss doesn't want any mistakes. If the solution is to drop and recreate, that always gives me trouble when I do it, so please suggest how to change the size.
    thanks
    kuljeet pal singh

The sizes of the redo log members are determined by several considerations:
1. The switch interval that you want.
2. The switch interval is determined by the amount of time (which represents data) that you are prepared to lose if you lose all redo members of one redo log group.
3. The switch interval is also determined by the sizes of the archived redo logs, so that you can store and handle them in a comfortable way.
4. On some occasions, redo members with a very low size, like 1 MB, can affect the performance of your database.
5. When they are too large, the database may have to wait while the archived redo logs are generated.
If you want to change the size of the redo members, you have to create new redo log groups and then remove the groups that you do not want. While doing this you must have at least 2 redo log groups at all times.
    Joel Pérez
    http://otn.oracle.com/experts
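A minimal sketch of that procedure (group numbers, paths, and the new size are placeholders; per point 4 above, 1 MB is almost certainly too small):

#!/bin/sh
# Hypothetical sketch: add new, smaller groups, then drop the old ones
# once V$LOG shows them INACTIVE. Keep at least two groups at all times;
# the dropped members' OS files must be removed by hand afterwards.
sqlplus -s "/ as sysdba" <<'EOF'
alter database add logfile group 4 ('/u01/oradata/redo04.log') size 20m;
alter database add logfile group 5 ('/u01/oradata/redo05.log') size 20m;
alter system switch logfile;
alter system switch logfile;
alter system checkpoint;
-- repeat for each old group once it is INACTIVE
alter database drop logfile group 1;
exit
EOF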

  • Jrun default event log errors

Does anyone see any familiar issues just by looking at this default-event.log from the JRun server? We are running the CFMX 6.1 updater.
    Thanks for your suggestions
    Emmanuel

You can review this thread. It tells you what is probably going on. However, a simpler method (than the one Sean mentions) to fix the issue is to simply scope your variables. Ideally all variables are initialized before they are called, or you use cfparam to initialize them. But you should always scope the variables, even if they are in the variables scope, so CF does not go searching through all the scopes (including CGI) as Sean discusses. So the tops of your pages should be full of:
<cfparam name="variables.foo" type="string"> etc.
This is especially critical on pages that initialize many variables, as with Fusebox and other frameworks. If you have dozens of unscoped cfparam tags on a single page, that page goes scrambling to find each variable in all the scopes normally searched. The CGI scope is maintained by the webserver, so CF must query it. Scope your variables, including those in the variables scope, whenever possible.

  • What security products are suggested for scrubbing rootkits from a Mac? There are good articles on similar repair for PCs and it makes me want to see if I can save this machine. It's in forensic recovery right now so I myself have not done anything yet.

    What security products are suggested for scrubbing rootkits from a Mac? There are good articles on similar repairs for other makes online. I would like to investigate whether a machine can be truly scrubbed or if it's best to retire it. I haven't done anything yet as it is a candidate for more extensive forensic recovery.
    Also, I am not sure if various malicious spoofing and cloaking tricks (making Wi-Fi appear off when it is on, hiding unauthorized sharing/remote access, falsifying System Preferences preference panes, etc.) are resolved by a thorough drive erase or are more similar to APTs?
    Finally, is there any emerging information regarding APT hiding places other than the recovery partition? I have heard mention of the EFI, for example, but it seems unproven and unlikely. Some people have also mentioned the RAM.
    This is an upsetting topic to some people, including me, so I appreciate circumspect, measured responses. Thanks! And don't try to answer all my questions if you really just want to comment or answer one. All thoughts are appreciated.

    Hi, Lincoln,
    A straightforward question. You are correct in recognizing the difference between tentative conclusion and certainty. Here are our main reasons:
    1. Incoming items noted on the console (or console sub logs) and Activity Monitor after defenses are overcome, and which are brought in by an unwelcome remote user, often have a process name and the word "kit." (Bear with me.) We soon observe the process is under attack, from terminal evidence and soon, decreased or lost functionality of the process. The terminal generally reports alteration of specific kernel behaviors. A simple example (that may or may not be accompanied by kernel changes and may simply alter permissions) is modifying Disk Utility such that key uses are unavailable. You can see how an attacker might value disabling partition views, mounting and permission repair. In retrospect, DU might not be a root alteration. I was thinking that its relation to fsck flagged it as a possible ring 0 item. I may need to know core parameters of a good example to pick strong ones.
    2. Incoming folders hidden for possible later use contained bundles of similar root kits, including some not applicable to Macs. From what I have read from reasonably credible sources, root kits are sold and traded both singly and in bundles.
    3. Root kits are a logical next choice for our attackers, as various prior techniques hindered us but did not paralyze us.
    4. One of the most authoritative articles I found was about PCs not Macs. I noted the assertion, undocumented, that an estimated one million computers are infected by root kit manipulations, and underscored that the kits can be used by people with low computer skills.
5. McAfee lists root kits (by description, not name) among its top five threat predictions for the coming year, though again, the emphasis is on PCs.
    Linc, I am trying to show a spectrum of observations and info that have shaped my thinking. To retrieve better captured evidence requires significant legwork at this time, but it is something I am willing to do if you can be patient. Understand this long attack has been like a natural disaster to us.
    I have not linked a few articles of interest because I forget if that's allowed. If so, I'd be glad to.
    After reviewing this partial answer, you may form another hypothesis. If so, please share it. I am comfortable with my position but not clinging to it.
    Thanks for your interest. Looking forward to your thoughts.
    Oh, yeah: some material is out for analysis, so we should have credible opinions pretty soon. Not positive exactly when.

  • Removing warning for "Default CSS file not found."

    I am getting this warning in Flash Builder:
    "Default CSS file not found."
    I can not for the life of me figure out how to get rid of it. Any suggestions?

    Default css file can be specified as a compiler option (per SWC, I think).
    See http://livedocs.adobe.com/flex/3/html/help.html?content=compilers_14.html
    For example, flex framework has a default CSS file: C:\Program Files\Adobe\Adobe Flash Builder 4\sdks\3.5.0\frameworks\projects\framework\default.css
    Do you have some library project or a swc with default css file option turned on, but the css file is missing?

  • Drive setup suggestion for multiple users editing simultaneously?

I work at a city college, not a professional company or broadcast studio, so resources are limited; we often have three people editing HDV content simultaneously in Final Cut Pro.
With the content kept on our multiple backup servers, there's simply too much network traffic to do this smoothly.
    Instead of keeping projects locally spread across multiple machines, I would like one centralized place for everything, for the Macs to access directly over gigabit or something else.
    So, what kind of setup do you guys suggest for this?
    The machines here are two quad-core G5s (no RAID or fiber-channel right now), and a Core2Duo iMac, F400 only.
    Again, it'd need to be able to handle three HDV projects going on simultaneously without skipping due to having to seek back and forth all over the drive.
    Thanks.

Yes, an Xsan system would perfectly fit the bill for what you want to do, but Xsan is not a cheap solution. When it is all said and done, it will cost you tens of thousands of dollars.
    The best, cheap solution would be to use Firewire drives. I would not duplicate a project onto three drives, because you will then always be trying to figure out which version is the most current. Instead, keep all of your project, capture scratch and render files on the firewire drives. Then move the drive to whichever computer you want to do the editing on.
    Properly log & capture all your footage, then archive all your project files, because Firewire hard drives will fail over time, loosing all the info on the discs. I did say this was the cheap solution. "Cheap" does have its costs…

Maybe you are looking for

  • How to Add a New Filter Value in BEx WAD to Display All Result

Hi experts, I want to add a new filter value in BEx WAD to display all data records. The scenario is as follows: The status field can have two values: Active ("A") and Inactive ("I"). But the requirement is to have a third value to display all the dat

  • Sql function in COLDFUSION

Can anybody suggest how I write SQL FUNCTIONS in ColdFusion? I tried to run a SQL FUNCTION in a <cfquery> but it returned a ColdFusion error. Do I need to use <cfstoredproc> for that, or any other <tag available> <cfquery name="blahb

  • Sale From SEZ Plants- Export, Domestic, Sales to SEZ customers

    Dear Folks Looking information for the followimg Scenarios for Sales :- 1. From SEZ Plant- Export- - What are the applicable duties and taxes. - Is there excise applicability. - For Customs duty is the same is recorded in CIN, I mean some separate do

  • Hi in VL06O, the item overview showing no line item in 'LIST OUTBOUND DELIV

Hi, in VL06O the "item overview" is showing no line items in the "List Outbound Delivery" tab. But for that particular delivery doc in VL03, the details show three line items. So could you please provide me with a proper suggestion. Best Regards, BDP

  • How to  include waitcursor on some event

hi all, I want to show a wait cursor on button click. How do I do that? Thanks for help...