Few large nodes or many small nodes

Hi guys,
In general, which option is better for implementing a RAC system: a few large nodes or many small nodes?
Say we have a system with 4 nodes of 4 CPUs each versus a system with 8 nodes of 2 CPUs each. Will there be a performance difference?
I understand there won't be a clear-cut answer for this, but I'd like to learn from your experiences.
Regards,

Hi,
The worst case in terms of block transfer is 3-way: even if you have 100 nodes, a single block is accessed in at most three hops. But there are other factors to consider.
For example, if you're using FC for SAN connectivity, I'd assume connecting 4 servers could cost more than connecting 2 servers.
On the load: let's say your load is 80 (in whatever units) and it's equally distributed among 4 servers, so each server carries 20 units. If one goes down, or is shut down for a rolling patch, its load is distributed among the other 3, so each of these will carry 20 + 20/3 ≈ 26.7. In the same scenario with only two servers, each carries 40, and if one goes down the remaining server has to carry the entire load. So you have to do some capacity planning in terms of CPU to decide whether 4 nodes or 2 nodes is better.
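The redistribution arithmetic above can be sketched quickly; the total load of 80 and the node counts are just the example's numbers:

```java
// Sketch of per-node load before and after losing nodes, for the example above.
public class RacLoad {
    // Load carried by each surviving node when `total` load is spread
    // over `nodes` servers and `failed` of them are down.
    static double perNode(double total, int nodes, int failed) {
        return total / (nodes - failed);
    }

    public static void main(String[] args) {
        System.out.printf("4 nodes, all up:   %.2f%n", perNode(80, 4, 0)); // 20.00
        System.out.printf("4 nodes, one down: %.2f%n", perNode(80, 4, 1)); // 26.67
        System.out.printf("2 nodes, one down: %.2f%n", perNode(80, 2, 1)); // 80.00
    }
}
```

The capacity-planning question is then whether each node has enough CPU headroom for the failure case, not the steady state.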

Similar Messages

  • PLL Library Size - Few Larger Libraries Or More Smaller Libraries

    Hi
    I am interested in people's thoughts on whether it is better to have fewer larger libraries or more smaller libraries. If you do break into smaller PLL libraries, should the grouping be such that some forms do not use modules in a given library, i.e. do not have to attach to all libraries? For common modules that all forms require access to, do you gain anything by having many little libraries rather than one larger library?
    Is it correct that PLL libraries are loaded into memory at run time, when the form is loaded and the library is attached?
    What are the issues to consider here?
    Thanks

    Hi Till,
    My honest opinion...do not merge the libraries. Switch between them using iPhoto Library manager and leave it at that.
    A 22 gig library is way too big to run efficiently.
    Lori

  • 1 large lun or many smaller luns

    Hi,
    I'm running Oracle 10g/11g. I'm NOT using ASM (that isn't an option right now). My storage is IBM DS3500 with IBM SVC in front of it.
    My question is, is it better to have 1 large lun or many smaller luns for the database (assuming its the same number of spindles in both cases)?
    Are there any limitations with queue depth..etc. I need to worry about with the 1 large lun?
    Any info would be greatly appreciated.
    Thanks!

    Hi,
    You opened this thread on the ASM forum, but you are not using ASM (???), which makes it difficult to answer your questions.
    Well...
    First you need to consult the manuals/whitepapers/technotes of the filesystem you will use, to check the recommendations for running a database on that filesystem.
    e.g. using JFS2 on AIX you can enable CIO...
    Another point:
    Creating large LUNs can be useful or not; it all depends on the characteristics of your environment.
    e.g. I believe it is not good to place 2 databases with different access/throughput characteristics on the same filesystem. One database can cause performance issues for the other if they share the same LUN.
    I particularly dislike large LUNs for an environment that will store several databases. I usually use large LUNs for large databases, and even then without sharing the area with other databases.
    My thoughts {message:id=9676881} although it is valid for ASM.
    I recommend you read it:
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#PFGRF015
    Regards,
    Levi Pereira

  • Dividing One large Image to many smaller images

    Dear Java developers,
    I have one large image and I want to divide it
    into many smaller images, but I don't see
    any Java API to do it...
    Can anybody help me?
    Thanks in Advance,

    I'd guess using BufferedImage and subimages thereof is faster than filtering it. Although it depends much on the implementation of the original image source, and its caching strategies. But it's pretty certain that when you are creating a BufferedImage which is appropriate for you current color model, you avoid most conversions which may be needed when rendering directly from an image source.
    Having said that, the image source and filtering way may even use more memory and cpu than the buffered image way. At least temporary. But the image source is allowed to release almost all memory associated with the image, down to retaining only the original URL.
    In simpler words:
    - With BufferedImage you can be quite sure how much memory it will need. Add up the space needed for the Raster and auxiliary data and there you are. It won't change much over time. But it's not present in JDK 1.1.
    -- Simple, predictable and modern.
    - ImageSource is pretty much opaque in how much memory it will use. However, its interface allows dropping most resources and re-creating them on demand. Of course, you'll know what it does when you're implementing it yourself. Which I tend to do from time to time.
    -- Complex (flow control), opaque but present in JDK 1.1.
    Your mileage may vary. There would be no challenge in programming if there were no tough decisions to be made ;-)
    /kre
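    For the original question: `BufferedImage.getSubimage(x, y, w, h)` does exactly this, and the returned tiles share the parent's raster, so no pixel data is copied. A minimal sketch, assuming the source is already a BufferedImage and using an arbitrary 100x100 tile size:

```java
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class ImageSplitter {
    // Splits `src` into tiles of at most tileW x tileH pixels.
    // getSubimage returns views sharing the original raster, so this is cheap.
    static List<BufferedImage> split(BufferedImage src, int tileW, int tileH) {
        List<BufferedImage> tiles = new ArrayList<>();
        for (int y = 0; y < src.getHeight(); y += tileH) {
            for (int x = 0; x < src.getWidth(); x += tileW) {
                int w = Math.min(tileW, src.getWidth() - x);  // clip edge tiles
                int h = Math.min(tileH, src.getHeight() - y);
                tiles.add(src.getSubimage(x, y, w, h));
            }
        }
        return tiles;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(250, 100, BufferedImage.TYPE_INT_RGB);
        System.out.println(split(img, 100, 100).size()); // 3 tiles: 100+100+50 wide
    }
}
```

    Note that a sub-image is a view: drawing into a tile also changes the parent. Copy the tile into a fresh BufferedImage if you need independent images.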

  • Simultaneous hash joins of the same large table with many small ones?

    Hello
    I've got a typical data warehousing scenario where a HUGE_FACT table is to be joined with numerous very small lookup/dimension tables for data enrichment. Joins with these small lookup tables are mutually independent, which means that the result of any of these joins is not needed to perform another join.
    So this is a typical scenario for a hash join: the lookup table is converted into a hashed map in RAM, fits there without drama because it's small, and a single pass over the HUGE_FACT suffices to get the results.
    Problem is, as far as I can see in the query plan, these hash joins are not executed simultaneously but one after another, which forces Oracle to do a full scan of the HUGE_FACT (or any intermediate enriched form of it) as many times as there are joins.
    Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?
    - if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?
    Please note that the parallel execution of a single join at a time is not the matter of the question.
    Database version is 10.2.
    Thank you very much in advance for any response.

    user13176880 wrote:
    Questions:
    - is my interpretation correct that the mentioned joins are sequential, not simultaneous?
    Correct. But why do you think this is an issue? Because of this:
    which renders Oracle to do the full scan of the HUGE_FACT (or any intermediary enriched form of it) as many times as there are joins.
    That is not true (or should not be). Oracle does one pass of the big table, and then sequentially joins to each of the hash maps (one for each of the smaller tables).
    If you show us the execution plan, we can be sure of this.
    - if this is the case, is there any possibility to force Oracle to perform these joins simultaneously (building more than one hashed map in memory and doing the single pass over the HUGE_FACT while looking up in all of these hashed maps for matches)? If so, how to do it?
    Yes there is. But again, you should not need to resort to such a solution. What you can do is use subquery factoring (the WITH clause) in conjunction with the MATERIALIZE hint to first construct the Cartesian join of all of the smaller (dimension) tables, and then join the big table to that.
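    The single-pass behavior described in the reply (one scan of the fact table, probed against several in-memory hash maps) can be illustrated outside SQL. A toy sketch with made-up dimension data, not Oracle's actual implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiHashJoin {
    // One scan of the fact rows, probing several prebuilt hash maps:
    // adding more lookup tables adds probes, not extra passes over the facts.
    static List<String> join(List<int[]> facts,
                             Map<Integer, String> dim1,
                             Map<Integer, String> dim2) {
        List<String> out = new ArrayList<>();
        for (int[] f : facts) { // the single pass over the "HUGE_FACT"
            out.add(dim1.get(f[0]) + "/" + dim2.get(f[1]));
        }
        return out;
    }

    public static void main(String[] args) {
        // Made-up dimension data, keyed like tiny lookup tables.
        Map<Integer, String> colors = new HashMap<>();
        colors.put(1, "red");
        Map<Integer, String> sizes = new HashMap<>();
        sizes.put(7, "large");
        List<int[]> facts = List.of(new int[]{1, 7});
        System.out.println(join(facts, colors, sizes)); // [red/large]
    }
}
```

    This is why the execution plan showing the joins in sequence does not imply repeated full scans of the fact table.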

  • How to split a large PDF into many smaller PDFs

    In my wanderings, I couldn't find an answer to this question. Thus my post.
    I have a large, 20 page, pdf. I'd like to split that pdf into 10 two page pdfs. How can I do it?
    The 20-pager is a pdf of a number of account statements. The account statements vary in length from 1 to 3 pages. I'd like to split the pdf so I end up with one pdf per account.
    In advance, thank you for your help

    Hi.
    It's simple: open the PDF, go to File, Print, and in the print dialog select Copies & Pages, enter the range you want, click PDF/Save as PDF.
    Good Luck.
    MacMini G4 1.25GHz 1GB   Mac OS X (10.4.9)  

  • Creating and organizing one large document from many small "forms"

    I'm organizing a symposium and attendees submit abstracts.
    I have set up a "form" using Word to distribute to people to fill out and named all the fields, example: "Title," "Firstauthor," "Body." Etc.
    They are going to email me completed forms.
    I was hoping it'd be easy to make a drag-and-drop script so that I can just drag and drop these files to create a big document that organizes the abstracts & attendees into a program.
    Word has the "catalog" creation option in the merge-manager, but it uses some sort of tab-delimited scheme for acquiring its data, whereas mine will be in the form of various fields and field names.
    At my disposal I also have FileMaker Pro, but it doesn't really seem able to do what I want. I don't want to enter information manually (hundreds of attendees).
    Could I make a script that would:
    tell FileMaker to open a database and create a new entry
    tell Word to get from a field "FirstName" and copy to clipboard
    tell FileMaker to paste from clipboard into cell "firstname" of the new entry
    ...etc. with the other fields, and so on, to create the database in FileMaker? And then in Word, the merge-manager has a user-friendly interface to merge data from a FileMaker database and use the create-a-catalog feature. Is this too convoluted?
    Any suggestions on the best route? any ideas? I don't think what I'm trying to do is all that unusual.
    I've never written an applescript, but I have used them and I read about the language. I am generally a quick learner .... I just need to be pointed into the best plan of attack or know what the capabilities are.
    Powerbook G4   Mac OS X (10.4.8)   OfficeX and File Maker Pro

    Hi Ettor,
    Firstly, if the document is password protected, then I don't know if it could be done. Tatro's documents probably aren't. The first step is to unprotect the fill-in form document with ui scripting:
    tell application "Microsoft Word" to activate
    tell application "System Events"
        tell process "Microsoft Word"
            tell menu bar 1
                tell menu "Tools"
                    delay 1
                    click menu item "Unprotect Document"
                end tell
            end tell
        end tell
    end tell
    The document needs to be unprotected for macros to work. from there you can run a saved macro that sets the Save Preference, Save data only for forms. I named my recorded macro "ChangeFormPref". The macro could probably save the file also, but I wanted a simple macro. To run the macro, I found this on the Internet somewhere:
    do Visual Basic "Application.Run \"ChangeFormPref\""
    At this point, when you save the document, it's saved as text with AppleScript. Here's the entire script with no error checking:
    set dp to (path to desktop as string)
    set fs to (dp & "new.txt")
    tell application "Microsoft Word" to activate
    tell application "System Events"
        tell process "Microsoft Word"
            tell menu bar 1
                tell menu "Tools"
                    delay 1
                    click menu item "Unprotect Document"
                end tell
            end tell
        end tell
    end tell
    tell application "Microsoft Word"
        activate
        do Visual Basic "Application.Run \"ChangeFormPref\""
        delay 1
        save front document in fs
    end tell
    The delays may not be necessary, except for the one that waits for Word to activate. Here, I just placed the new.txt file on the desktop for testing.
    Next, AppleScript could easily concatenate the files creating data for a database. I would probably use the new.txt file as a temporary file, read that file, concatenate to a main file, clear the temp file, rewrite to it with Word, etc.. It might be faster though to create all the files first with some naming convention.
    I wasn't sure if Tatro was coming back, but am glad someone may use it.
    Note that Tatro is using Word X.
    Edited: I should give a warning that if you unprotect document and protect it again you lose the data. reprotecting seems to clear the form.
    gl,

  • More smaller disks or fewer larger ones?

    Hi:
    We're in the middle of procuring disks for a new Oracle server. We plan to stripe disks to randomly distribute I/O load across disk heads. There is some debate on whether we should use fewer large disks vs. more smaller ones. For example, the datafiles for one schema will consume 432 GB. In terms of performance, would it be better to stripe 6 x 72 GB disks or 12 x 36 GB disks? Are there any other pros/cons for one scenario vs. the other?
    Thanks in Advance !

    Generally, the more heads you have working the better, so more smaller disks would be the way to go. Additionally, with smaller drives, recovery from a drive failure is faster. Of course, the major drawback to this approach is that you will need bays for the drives, and if you are looking at server-attached storage, you may not have enough drive bays available.
    Another consideration is the storage technology you are looking at; for example, the HP EVA SANs require a minimum of 8 drives for a drive set, so in your example you would need a minimum of 8 x 72 GB disks whether you needed the capacity or not.

  • How to blink a few specific tree view nodes

    Here I have some code which shows how to blink a tree view node, but I am confused about how to blink only a few specific nodes.
    Treeview control - How to make a node blink? (Visual C# forum; answered by Tamer Oz)
    Hi,
    Is there an "elegant" way to make a treeview node blink?
    I am thinking to use a timer with the collection of nodes that I want to make the blink effect, and update the icon ...
    Friday, November 06, 2009 6:19 PM
    Hi,
    You can develop your custom control for this purpose. The logic you mentioned was correct. Here is a sample control that I developed by the logic you mentioned.
    public class BlinkingTreeView : TreeView
    {
        private Timer t = new Timer();
        private List<TreeNode> blinkingNodes = new List<TreeNode>();
        private bool isNodeBlinked = false;

        public BlinkingTreeView()
        {
            t.Interval = 1000;
            t.Tick += new EventHandler(t_Tick);
        }

        void t_Tick(object sender, EventArgs e)
        {
            foreach (TreeNode tn in blinkingNodes)
            {
                if (isNodeBlinked)
                    tn.Text = tn.Text.Substring(0, tn.Text.Length - 1); // update icon here; text change is just to test
                else
                    tn.Text = tn.Text + "*"; // update icon here; text change is just to test
            }
            isNodeBlinked = !isNodeBlinked; // toggle once per tick, not once per node
        }

        public void AddBlinkNode(TreeNode n) { blinkingNodes.Add(n); }
        public void RemoveBlinkNode(TreeNode n) { blinkingNodes.Remove(n); }
        public void ClearBlinkNodes() { blinkingNodes.Clear(); }

        public List<TreeNode> BlinkingNodes
        {
            get { return blinkingNodes; }
        }

        public int BlinkInterval
        {
            get { return t.Interval; }
            set { t.Interval = value; }
        }

        public void StartBlinking()
        {
            isNodeBlinked = false;
            t.Enabled = true;
        }

        public void StopBlinking()
        {
            t.Enabled = false;
        }
    }
    Just show me how to use the BlinkingTreeView class. I will have a tree view with a few nodes, and some nodes may have a few child nodes. Now, how do I achieve this with the BlinkingTreeView class? Show me how to blink a few specific nodes, not all of them. Thanks.

    It would be better to answer with code. First populate the tree view with some dummy nodes this way:
    Root
           Child1
                    Child1-sub1
                    Child1-sub2
           Child2
                    Child2-sub1
                    Child2-sub2
    Now blink Child1-sub2 & Child2-sub1. Please answer with code. Thanks.

  • Alu 2s fails at copying many small files

    Hi,
    When I transfer big files I get nice speeds and have no problems (around 80 MB/s).
    But then I wanted to back up my mail directory, which contains many small (few-kB) email files, around 40,000 of them.
    Speed drops to something like 5 MB/s. I know it's supposed to drop with small files, but this seems a bit extreme.
    When copying is done, the LED keeps flashing.
    When I try to copy a larger file afterwards, the speed is around 1 MB/s.
    The LED keeps blinking even when no files are being transferred.
    When I force-unmount the drive and reconnect, many files are filled with nulls, so obviously it didn't complete the writing process properly.
    I waited over 30 minutes but the LED keeps blinking; software unmount doesn't work in this state.
    Is this a known bug? It's really annoying.
    First the annoying virtual CD drive and now this.
    I'm very disappointed.

    Hi
    Try to defragment the external HDD
    You can use the internal Windows Disk Defragmenter which can be found in
    All Programs -> Accessories -> System tools

  • Query performance - A single large document VS multiple small documents

    Hi all,
    What are the performance trade offs when using a single large document VS multiple small documents ?
    I want to store xml snippets with similar structure in a container. Is there any benefit when querying if I use a single large document to store all these snippets, versus adding each snippet as a separate document? Would performance degrade if I add each xml snippet by modifying an existing document?
    How should we decide whether to use a single large document vs. multiple small documents?
    Thanks,
    Anoop

    Hello Anoop,
    In case you wanted to get a comparison between the storage types for containers, wholedoc and node, let us know.
    What are the performance trade offs when using a single large document VS multiple small documents?
    It depends on what is more important to you: performance when creating the container and inserting the document(s), or performance when retrieving data.
    For querying the best option is to go with smaller documents, as node indexes would help in improving query performance.
    For inserting initial data, you can construct your large document composed of smaller xml snippets and insert the document as a whole.
    If you further want to modify this document, changing its structure implies performance penalties; it is better to store the xml snippets as separate documents.
    Overall, I see no point in using a large document that will hold all of your xml snippets, so I strongly recommend going with multiple smaller documents.
    Regards,
    Andrei Costache
    Oracle Support Services

  • Large servlet or several small ones??

    I am building a servlet for a web application which is becoming quite large. I am beginning to wonder about the resources that this will use on the server.
    Does anyone have any views on whether it would be better to split the servlet into several smaller ones or keep one large do-it-all servlet?
    cheers
    chris

    I read these questions and answers, and I'm sure small servlets are better programming practice, but my question is: what is faster?
    I mean, one big servlet needs time to load and initialize, but only the first time; many small servlets, or a framework, need time to instantiate objects on every call.
    Am I wrong?

  • Many small problems in Mavericks

    When users experience many small problems in daily work, I feel the design has failed.
    Memory management has a lot of problems; my MacBook feels abnormally slow since I upgraded to Mavericks. Sometimes I need to wait several seconds to get what I want.
    I think this problem is extremely important for an OS.
    When I want to start a program: click once, no response; click twice, and it pops up twice after 5 seconds.
    So I changed my way of working: click once, then work on another task. 5 seconds later, multi-desktop switches the desktop to the starting program, and then I don't know what I was doing.
    Another is a sound problem: sound sometimes doesn't work and needs some kind of procedure to get it back.
    Please don't ask me to do something to solve the problem every time it happens. It is the OS's job to work properly.
    However many great features you add, they are meaningless if the user's daily experience is bad.

    1. This procedure is a diagnostic test. It changes nothing, for better or worse, and therefore will not, in itself, solve your problem.
    2. If you don't already have a current backup, back up all data before doing anything else. The backup is necessary on general principle, not because of anything in the test procedure. There are ways to back up a computer that isn't fully functional. Ask if you need guidance.
    3. Below are instructions to run a UNIX shell script, a type of program. All it does is to gather information about the state of your computer. That information goes nowhere unless you choose to share it on this page. However, you should be cautious about running any kind of program (not just a shell script) at the request of a stranger on a public message board. If you have doubts, search this site for other discussions in which this procedure has been followed without any report of ill effects. If you can't satisfy yourself that the instructions are safe, don't follow them.
    Here's a summary of what you need to do, if you choose to proceed: Copy a line of text from this web page into the window of another application. Wait for the script to run. It usually takes a couple of minutes. Then paste the results, which will have been copied automatically, back into a reply on this page. The sequence is: copy, paste, wait, paste again. Details follow.
    4. You may have started the computer in "safe" mode. Preferably, these steps should be taken in “normal” mode. If the system is now in safe mode and works well enough in normal mode to run the test, restart as usual. If you can only test in safe mode, do that.
    5. If you have more than one user, and the one affected by the problem is not an administrator, then please run the test twice: once while logged in as the affected user, and once as an administrator. The results may be different. The user that is created automatically on a new computer when you start it for the first time is an administrator. If you can't log in as an administrator, test as the affected user. Most personal Macs have only one user, and in that case this section doesn’t apply.
    6. The script is a single long line, all of which must be selected. You can accomplish this easily by triple-clicking  anywhere in the line. The whole line will highlight, though you may not see all of it in your browser, and you can then copy it. If you try to select the line by dragging across the part you can see, you won't get all of it.
    Triple-click anywhere in the line of text below on this page to select it:
    PATH=/usr/bin:/bin:/usr/sbin:/sbin; clear; Fb='%s\n\t(%s)\n'; Fm='\n%s\n\n%s\n'; Fr='\nRAM details\n%s\n'; Fs='\n%s: %s\n'; Fu='user %s%%, system %s%%'; PB="/usr/libexec/PlistBuddy -c Print"; A () { [[ a -eq 0 ]]; }; M () { find -L "$d" -type f | while read f; do file -b "$f" | egrep -lq XML\|exec && echo $f; done; }; Pc () { o=`grep -v '^ *#' "$2"`; Pm "$1"; }; Pm () { [[ "$o" ]] && o=`sed '/^ *$/d; s/^ */   /' <<< "$o"` && printf "$Fm" "$1" "$o"; }; Pp () { o=`$PB "$2" | awk -F'= ' \/$3'/{print $2}'`; Pm "$1"; }; Ps () { o=`echo $o`; [[ ! "$o" =~ ^0?$ ]] && printf "$Fs" "$1" "$o"; }; R () { o=; [[ r -eq 0 ]]; }; SP () { system_profiler SP${1}DataType; }; id | grep -qw '80(admin)'; a=$?; A && sudo true; r=$?; t=`date +%s`; clear; { A || echo $'No admin access\n'; A && ! R && echo $'No root access\n'; SP Software | sed '8!d;s/^ *//'; o=`SP Hardware | awk '/Mem/{print $2}'`; o=$((o<4?o:0)); Ps "Total RAM (GB)"; o=`SP Memory | sed '1,5d; /[my].*:/d'`; [[ "$o" =~ s:\ [^O]|x([^08]||0[^2]8[^0]) ]] && printf "$Fr" "$o"; o=`SP Diagnostics | sed '5,6!d'`; [[ "$o" =~ Pass ]] || Pm "POST"; for b in Thunderbolt USB; do o=`SP $b | sed -En '1d; /:$/{s/ *:$//;x;s/\n//p;}; /^ *V.* [0N].* /{s/ 0x.... 
//;s/[()]//g;s/\(.*: \)\(.*\)/ \(\2\)/;H;}; /Apple|SCSM/{s/.//g;h;}'`; Pm $b; done; o=`pmset -g therm | sed 's/^.*C/C/'`; [[ "$o" =~ No\ th|pms ]] && o=; Pm "Thermal conditions"; o=`pmset -g sysload | grep -v :`; [[ "$o" =~ =\ [^GO] ]] || o=; Pm "System load advisory"; o=`nvram boot-args | awk '{$1=""; print}'`; Ps "boot-args"; d=(/ ""); D=(System User); E=; for i in 0 1; do o=`cd ${d[$i]}L*/L*/Dia* || continue; ls | while read f; do [[ "$f" =~ h$ ]] && grep -lq "^Thread c" "$f" && e=" *" || e=; awk -F_ '!/ag$/{$NF=a[split($NF,a,".")]; print $0 "'"$e"'"}' <<< "$f"; done | tail`; Pm "${D[$i]} diagnostics"; done; [[ "$o" =~ \*$ ]] && printf $'\n* Code injection\n'; o=`syslog -F bsd -k Sender kernel -k Message CReq 'GPU |hfs: Ru|I/O e|last value [1-9]|n Cause: -|NVDA\(|pagin|SATA W|ssert|timed? ?o' | tail -n25 | awk '/:/{$4=""; $5=""};1'`; Pm "Kernel messages"; o=`df -m / | awk 'NR==2 {print $4}'`; o=$((o<5120?o:0)); Ps "Free space (MiB)"; o=$(($(vm_stat | awk '/eo/{sub("\\.",""); print $2}')/256)); o=$((o>=1024?o:0)); Ps "Pageouts (MiB)"; s=( `sar -u 1 10 | sed '$!d'` ); [[ s[4] -lt 85 ]] && o=`printf "$Fu" ${s[1]} ${s[3]}` || o=; Ps "Total CPU usage" && { s=(`ps acrx -o comm,ruid,%cpu | sed '2!d'`); o=${s[2]}%; Ps "CPU usage by process \"$s\" with UID ${s[1]}"; }; s=(`top -R -l1 -n1 -o prt -stats command,uid,prt | sed '$!d'`); s[2]=${s[2]%[+-]}; o=$((s[2]>=25000?s[2]:0)); Ps "Mach ports used by process \"$s\" with UID ${s[1]}"; o=`kextstat -kl | grep -v com\\.apple | cut -c53- | cut -d\< -f1`; Pm "Loaded extrinsic kernel extensions"; R && o=`sudo launchctl list | sed 1d | awk '!/0x|com\.(apple|openssh|vix\.cron)|org\.(amav|apac|calendarse|cups|dove|isc|ntp|post[fg]|x)/{print $3}'`; Pm "Extrinsic system jobs"; o=`launchctl list | sed 1d | awk '!/0x|com\.apple|org\.(x|openbsd)|\.[0-9]+$/{print $3}'`; Pm "Extrinsic agents"; o=`for d in {/,}L*/Lau*; do M; done | grep -v com\.apple\.CSConfig | while read f; do ID=$($PB\ :Label "$f") || ID="No job label"; printf "$Fb" 
"$f" "$ID"; done`; Pm "launchd items"; o=`for d in /{S*/,}L*/Star*; do M; done`; Pm "Startup items"; o=`find -L /S*/L*/E* {/,}L*/{A*d,Compon,Ex,In,Keyb,Mail/B,P*P,Qu*T,Scripti,Servi,Spo}* -type d -name Contents -prune | while read d; do ID=$($PB\ :CFBundleIdentifier "$d/Info.plist") || ID="No bundle ID"; [[ "$ID" =~ ^com\.apple\.[^x]|Accusys|ArcMSR|ATTO|HDPro|HighPoint|driver\.stex|hp-fax|\.hpio|JMicron|microsoft\.MDI|print|SoftRAID ]] || printf "$Fb" "${d%/Contents}" "$ID"; done`; Pm "Extrinsic loadable bundles"; o=`find -L /u*/{,*/}lib -type f | while read f; do file -b "$f" | grep -qw shared && ! codesign -v "$f" && echo $f; done`; Pm "Unsigned shared libraries"; o=`for e in DYLD_INSERT_LIBRARIES DYLD_LIBRARY_PATH; do launchctl getenv $e; done`; Pm "Environment"; o=`find -L {,/u*/lo*}/e*/periodic -type f -mtime -10d`; Pm "Modified periodic scripts"; o=`scutil --proxy | grep Prox`; Pm "Proxies"; o=`scutil --dns | awk '/r\[0\] /{if ($NF !~ /^1(0|72\.(1[6-9]|2[0-9]|3[0-1])|92\.168)\./) print $NF; exit}'`; Ps "DNS"; R && o=`sudo profiles -P | grep : | wc -l`; Ps "Profiles"; f=auto_master; [[ `md5 -q /etc/$f` =~ ^b166 ]] || Pc $f /etc/$f; for f in fstab sysctl.conf crontab launchd.conf; do Pc $f /etc/$f; done; Pc "hosts" <(grep -v 'host *$' /etc/hosts); Pc "User launchd" ~/.launchd*; R && Pc "Root crontab" <(sudo crontab -l); Pc "User crontab" <(crontab -l); R && o=`sudo defaults read com.apple.loginwindow LoginHook`; Pm "Login hook"; Pp "Global login items" /L*/P*/loginw* Path; Pp "User login items" L*/P*/*loginit* Name; Pp "Safari extensions" L*/Saf*/*/E*.plist Bundle | sed -E 's/(\..*$|-[1-9])//g'; o=`find ~ $TMPDIR.. \( -flags +sappnd,schg,uappnd,uchg -o ! -user $UID -o ! -perm -600 \) | wc -l`; Ps "Restricted user files"; cd; o=`SP Fonts | egrep "Valid: N|Duplicate: Y" | wc -l`; Ps "Font problems"; o=`find L*/{Con,Pref}* -type f ! 
-size 0 -name *.plist | while read f; do plutil -s "$f" >&- || echo $f; done`; Pm "Bad plists"; d=(Desktop L*/Keyc*); n=(20 7); for i in 0 1; do o=`find "${d[$i]}" -type f -maxdepth 1 | wc -l`; o=$((o<=n[$i]?0:o)); Ps "${d[$i]##*/} file count"; done; o=$((`date +%s`-t)); Ps "Elapsed time (s)"; } 2>/dev/null | pbcopy; exit 2>&-
    Copy the selected text to the Clipboard by pressing the key combination command-C.
    7. Launch the built-in Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
    When you launch Terminal, a text window will open with a line already in it, ending either in a dollar sign ($) or a percent sign (%). If you get the percent sign, enter
    exec bash
    in the window and press return. You should then get a new line ending in a dollar sign.
    Click anywhere in the Terminal window and paste (command-V). The text you pasted should vanish immediately. If it doesn't, press the return key.
    If you're logged in as an administrator, you'll be prompted for your login password. Nothing will be displayed when you type it. You will not see the usual dots in place of typed characters. Make sure caps lock is off. Type carefully and then press return. You may get a one-time warning to be careful. If you make three failed attempts to enter the password, the test will run anyway, but it will produce less information. In most cases, the difference is not important. If you don't know your password, or if you prefer not to enter it, just press return three times at the password prompt.
    If you're not logged in as an administrator, you won't be prompted for a password. The test will still run. It just won't do anything that requires administrator privileges.
    The test may take a few minutes to run, depending on how many files you have and the speed of the computer. A computer that's abnormally slow may take longer to run the test. While it's running, there will be nothing in the Terminal window and no indication of progress. Wait for the line "[Process completed]" to appear. If you don't see it within half an hour or so, the test probably won't complete in a reasonable time. In that case, close the Terminal window and report your results. No harm will be done.
    8. When the test is complete, quit Terminal. The results will have been copied to the Clipboard automatically. They are not shown in the Terminal window. Please don't copy anything from there. All you have to do is start a reply to this comment and then paste by pressing command-V again.
    If any private information, such as your name or email address, appears in the results, anonymize it before posting. Usually that won't be necessary.
    When you post the results, you might see the message, "You have included content in your post that is not permitted." It means that the forum software has misidentified something in the post as a violation of the rules. If it happens, please post the test results on Pastebin, then post a link here to the page you created.
    Note: This is a public forum, and others may give you advice based on the results of the test. They speak only for themselves, and I don't necessarily agree with them.
    Copyright © 2014 Linc Davis. As the sole author of this work, I reserve all rights to it except as provided in the Terms of Use of Apple Support Communities ("ASC"). Readers of ASC may copy it for their own personal use. Neither the whole nor any part may be redistributed.

  • Pros and cons between the large log buffer and small log buffer?

    pros and cons between the large log buffer and small log buffer?
    Many people suggest that a small log buffer (1-3 MB) is better because we can avoid wait events for users. But I think we can also gain an advantage with a bigger one, because we can reduce redo log file I/O.
    What is the optimal size of the log buffer? Should I consider OLTP vs DSS as well?

    Hi,
    It's interesting to note that some very large shops find that a > 10 MB log buffer provides better throughput. Also, check out this new world-record benchmark, with a 64 MB log_buffer. The TPC notes that they chose it based on the cpu_count:
    log_buffer = 67108864 # 1048576 x cpu_count
    http://www.dba-oracle.com/t_tpc_ibm_oracle_benchmark_terabyte.htm

  • TS4436 Seems to me this is a bunch of self serving crap. I have many small cameras that don't do this, in the same specs.

    Seems to me this is a bunch of self serving crap. I have many small cameras that don't do this, in the same specs.

    gdgmacguy,
    Let me help you a bit with the English: "As a fellow user here, I can tell you that no one cares what you are saying."
    Please let me know if the translation is not correct.
    The camera *****. I'm not holding it wrong.
