INXI - a great script for system info.

h2, one of the regulars in my channel #linux-smokers-club on OFTC, has made a fork of infobash called inxi.
Great stuff for showing system info in a terminal or on IRC. Don't use it in #archlinux though, I don't think they would like that.
Testing is done in my channel mentioned above and in #inxi on freenode.
Here's the link: http://www.inxi.org/
h2 wants us to put it in /usr/local/bin, but I can't get it to work there, so I've placed it in /usr/bin. And that's not a problem, it can sit anywhere, even in /home if you like to run it as a user.
inxi -h for help
inxi -F for full output
inxi -U to update
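If you do want to try it from /usr/local/bin (or ~/bin), something like this is all it takes. Just a rough sketch: the download URL below is only a placeholder, grab the script from the site linked above.

# fetch the script, make it executable and put it on your PATH
wget -O inxi 'http://www.inxi.org/<download-link>'   # placeholder URL, use the real link from the site
chmod +x inxi
sudo mv inxi /usr/local/bin/    # or: mv inxi ~/bin/ to run it as a normal user
inxi -F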
Hope you like it
Cheers!
fff
Last edited by fff (2008-11-19 18:18:49)

After a while inxi will probably include sensors, and I will suggest battery status.
Be sure to do an inxi -U once in a while; the script is under heavy development and now includes a lot more features.
cheers!
-fff

Similar Messages

  • WSUS script for pending reboot possible addition - How

    Hi, I found a script for pending reboots and it works perfectly. My problem is that the script only reports pending reboots for computers on the master WSUS server, not for the replica servers. Can I modify this script to report pending reboots for all replica servers in
    one place (the WSUS master server), or must I run this script on every replica server? This is the script:
    [reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null
    if (!$wsus) {
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer();
    }
    $computerScope = new-object Microsoft.UpdateServices.Administration.ComputerTargetScope;
    $computerScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $updateScope = new-object Microsoft.UpdateServices.Administration.UpdateScope;
    $updateScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $computers = $wsus.GetComputerTargets($computerScope);
    $report = @()
    $computers | foreach-object {
        # one row per computer/update pair that is pending a reboot
        $computer = $_.FullDomainName
        $updatesForReboot = $_.GetUpdateInstallationInfoPerUpdate($updateScope)
        $updatesForReboot | foreach-object {
            $temp = "" | Select Computer,Update
            $temp.Computer = $computer
            $temp.Update = ($wsus.GetUpdate($_.UpdateId)).Title
            $report += $temp
        }
    }
    $report | Select "Computer","Update" | Export-Csv -Path c:\..PendingReboot.csv -Delimiter 1 -NoTypeInformation

    The modified script works great. I have a report from all replica servers and the master server after today's new updates. I added a mail option, and finally this is what I modified:
    [reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null
    if (!$wsus) {
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer();
    }
    $computerScope = new-object Microsoft.UpdateServices.Administration.ComputerTargetScope;
    # include clients reporting to downstream (replica) servers as well
    $computerScope.IncludeDownstreamComputerTargets = $true
    $computerScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $updateScope = new-object Microsoft.UpdateServices.Administration.UpdateScope;
    $updateScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $computers = $wsus.GetComputerTargets($computerScope);
    $report = @()
    $computers | foreach-object {
        $computer = $_.FullDomainName
        $updatesForReboot = $_.GetUpdateInstallationInfoPerUpdate($updateScope)
        $updatesForReboot | foreach-object {
            $temp = "" | Select Computer,Update
            $temp.Computer = $computer
            $temp.Update = ($wsus.GetUpdate($_.UpdateId)).Title
            $report += $temp
        }
    }
    $report | Select "Computer","Update" | Export-Csv -Path c:\yourpath...PendingReboot.csv -Delimiter 1 -NoTypeInformation
    # mail the CSV report
    $smtpServer = "your mail server"
    $att = "c:\yourpath...PendingReboot.csv"
    $msg = new-object Net.Mail.MailMessage
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)
    $msg.From = "[email protected]"
    $msg.To.Add("[email protected]")
    $msg.Subject = "Pending Reboot"
    $msg.Body = "Your msg"
    $msg.Attachments.Add($att)
    $smtp.Send($msg)

  • Script for getting mail if database is down

    Hi Friends,
    OS version: IBM AIX 5.2
    Oracle version: 9.2.0.7
    I am executing the following script to get a mail alert if the database is down. Somehow the script is not working:
    check_stat=`ps -ef|grep ${ORACLE_SID}|grep pmon|wc -l`;
    oracle_num=`expr $check_stat`
    if [ $oracle_num -lt 1 ]
    then
    exit 0
    fi
    # Test to see if Oracle is accepting connections
    $ORACLE_HOME/bin/sqlplus -s "/as sysdba" > /tmp/check_$ORACLE_SID.ora
    select name from v$database;
    exit
    # If not, exit and e-mail . . .
    check_stat=`cat /tmp/check_$ORACLE_SID.ora|grep -i error|wc -l`;
    oracle_num=`expr $check_stat`
    if [ $oracle_num -ne 0 ]
    then
    mailx -s "$ORACLE_SID is down!" [email protected] < /tmp/check_$ORACLE_SID.ora
    exit 16
    fi
    I am saving this as a .sh file and executing it at the command prompt. It just hangs, without throwing any error.
    I would like to know if there is anything to be modified in the script, or please provide me with such a script. Thanks in advance.

    Hi there.
    I have a script I use that works really well. It sends out an email only if the database is down, and it also reads an ini file to process a blackout period and a priority level for the database. High priority databases are monitored every 5 minutes and medium priority databases every hour.
    There are two scripts, the shell script and the .ini file, and I have two cron entries.
    Check script:
    #!/bin/ksh
    # check_oracle_status.sh
    # Script to check if Oracle db's are up and running.
    # Script is passed a priority field and reads check_oracle_status.ini
    # to determine which db's to check. If db is down an email is sent.
    # Priority Levels:
    # H - Checks db's with "H"igh Priority every 5 minutes (cron)
    # M - Checks db's with "M"edium Priority every hour (cron)
    # L - db's with "L"ow Priority currently not checked
    # Script Change History:
    # ======================
    # October 29th, 2009 - Initial Creation
    # Set environment
    export SCRIPTHOME=/opt/oracle/admin/scripts
    export INIFILE=$SCRIPTHOME/check_oracle_status.ini
    export PRIORITY=$1
    . $HOME/.profile
    db=`grep -i ":$PRIORITY" $INIFILE | cut -d":" -f1`
    check_database()
    {
    sqlplus <<! > $SCRIPTHOME/check.out
    / as sysdba
    select * from dba_data_files;
    exit
    !
    grep ORA- $SCRIPTHOME/check.out > $SCRIPTHOME/error.out
    if (( $? )); then
    echo ""
    else
    mailx -s "Oracle instance $i is currently UNAVAILABLE" +<email address>+ < $SCRIPTHOME/error.out
    fi
    }
    for i in $db ; do
    fields=`grep $i $INIFILE | awk -F':' '{ total = total + NF }; END {print total}'`
    export ORACLE_SID=$i
    if [ $fields -gt 2 ]; then
    BLACKOUT_START=`grep -i "$ORACLE_SID" $INIFILE | cut -d":" -f3`
    BLACKOUT_END=`grep -i "$ORACLE_SID" $INIFILE | cut -d":" -f4`
    CURRENT_HOUR=`date +%H`
    CHECK_BASE=YES
    if [ $BLACKOUT_START -gt $BLACKOUT_END ]; then
    (( $CURRENT_HOUR >= $BLACKOUT_START || $CURRENT_HOUR <= $BLACKOUT_END )) && CHECK_BASE=
    else
    (( $CURRENT_HOUR >= $BLACKOUT_START && $CURRENT_HOUR <= $BLACKOUT_END )) && CHECK_BASE=
    fi
    if [ -n "$CHECK_BASE" ]; then
    check_database
    fi
    else
    check_database
    fi
    done
    rm $SCRIPTHOME/check.out $SCRIPTHOME/error.out
    INI File:
    oracle1:L
    oracle2:M:17:08
    oracle3:M
    oracle5:M:17:08
    oracle6:H
    oracle7:M:17:08
    oracle8:M
    oracle9:M
    Where oracle1, oracle2, oracle3, etc. are your SIDs,
    L, M and H are your priority levels,
    17 is the blackout start (5 PM), and
    08 is the blackout end (8 AM).
    Note: a blackout is just a start hour and an end hour and must contain both or none, and my script can only process one blackout per database. I guess if you
    needed a second blackout you could add another line with different times for that SID.
    Cron entries:
    # Check Oracle Status
    # The check_oracle_status.sh script monitors "H"igh priority databases every 5 minutes
    # and "M"edium priority databases every hour
    0,5,10,15,20,25,30,35,40,45,50,55 * * * * /opt/oracle/admin/scripts/check_oracle_status.sh H > /dev/null 2>&1
    0 * * * * /opt/oracle/admin/scripts/check_oracle_status.sh M > /dev/null 2>&1
    Not sure if you require blackouts or priority levels but this setup works great at our site.
    Hope this helps.

  • Script for combining multiple documents?

    hi there,
    well, I think there's been a lot of discussion around this topic, but my concerns aren't quite lining up with the existing threads. We're using a script for placing images, and instead of going document to document, I would like to combine all the INDD documents (there are usually at least 24), importing them NOT as PDFs but as individual pages within the one document, insert my images using the other script, and then later re-export the pages as individual documents, with page numbers ascending in the same sequence they were brought in.
    thanks for the help.

    -Printing by folder only helps when I need to run one copy of the files; typically I need to print multiple copies or save them for future printing.  Also, I have had errors printing via that method before, because it will sometimes overload the printer queue.
    -It would be great if clients always provided print-ready documents, but that is usually not the case and I have to correct it, which is why I am here asking how to do this.
    And if you cannot do that then you need to insert a blank page into the files with an odd number of pages.
    Right.  That is what I am looking for; something to automate inserting a blank page into files that have an odd number of pages WITHOUT knowing the documents' page counts beforehand and WITHOUT having to manually insert blank pages.

  • Script for Required fields acting strange on extra spaces at end of Name/Description

    Found this script on this forum, which I added to my submit button.
    var emptyFields = [];
    for (var i=0; i<this.numFields; i++) {
         var f = this.getField(this.getNthFieldName(i));
         if (f.type!="button" && f.required) {
              if ((f.type=="text" && f.value=="") || (f.type=="checkbox" && f.value=="Off")) emptyFields.push(f.name);
         }
    }
    if (emptyFields.length>0) {
         app.alert("Error! You must fill in the following fields:\n" + emptyFields.join("\n"));
    }
    Great script, but the ERROR! popup was not coming up until I went into InDesign and removed extra spaces after the "(g)" Description and Names.
    I am new at this; can someone let me know of any rules for naming these fields, such as using underscores, or any other illegal characters?

    Jeff,
    The SQL in the script file is below.  To be honest, I have reduced it down to a simple select from dual and it still puts extra spaces at the end of the single line.
    col ord noprint
    spool test.csv
    SELECT  ' ' ord,
      'ZONE_ORDER_NUMBER'||','||
      'ZONE_NAME'||','||
      'ZONE_TYPE'||','||
      'DESCRIPTION'||','||
      'START_DATE'||','||
      'END_DATE'
    FROM dual
    UNION ALL
    SELECT zone_name ord,
      '"'||zone_order_number||'"'||','||
      '"'||zone_name||'"'||','||
      '"'||zone_type||'"'||','||
      '"'||description||'"'||','||
      '"'||TO_CHAR(start_date,'DD-MON-YY')||'"'||','||
      '"'||TO_CHAR(end_date,'DD-MON-YY')||'"'
    FROM zones
    ORDER BY ord;
    spool off

  • Help Scripting for PDF Form List Box

    I am attempting to write JavaScript for an Adobe Acrobat X Pro form. I would like to set an action script for a List Box titled "customer_name" so that when a specific customer name is selected, a corresponding text box titled "customer_number" will automatically fill with the correct number. I am not very familiar with JavaScript so any help would be greatly appreciated.

    The easiest way to do this is to set the export value of each list box item to the corresponding customer number. The script for the customer number field could then simply be:
    // Set this field's value to the export value of the selected item in the list box
    event.value = getField("customer_name").valueAsString;
    Post again if you don't want to set up the list box that way for some reason since there are other ways to deal with this.

  • How to shell script for noob? or Cryptography for someone who doesn't need.

    Hi, I've seen the need to automate some tasks in the Terminal and I believe using shell scripts is my solution, although I don't really understand how they work yet.
    Instead of posting a full how-to here, I'd like to ask if anyone knows of a good, comprehensive guide for someone who has never used any programming language, yet knows how to work a bit with the Terminal?
    I will figure out the commands I need to input in the Terminal myself, by testing. Once that's figured out, all I need is to make a shell script out of it, and perhaps make an application out of it. (Automator? ... or more Script Editor? Or?)
    Thanks
    After seeing this page...
    http://www.askdavetaylor.com/howcan_i_secure_encrypt_folders_on_my_macs_usb_flashdrive.html
    ... I believe I have found a great solution for some heavy cryptography, to protect some folders and for learning pleasure. What I want to do is automate the openssl task, mostly like this:
    Open my flash drive (or a certain folder) containing a disk image (uncompressed, or compressed if necessary, it doesn't matter) but unencrypted. Clicking on something I will name 'Lock' for the moment will run the shell script, encrypting the said image with pre-set parameters and a password I will input when prompted by the app.
    Re-running the app will prompt me for a password and simply decrypt the image, making it usable for me.
    That's all. Perhaps if I can do it, I'll make it prompt me for which cipher and which other parameters to use, but I don't understand openssl very well yet. I just read and more or less understood the information on that page.
    Who knows, I may end up with a sweet GUI for encrypting files, usable by common mortals.

    The Advanced Bash Scripting Guide is a great resource for beginners through advanced users: http://tldp.org/LDP/abs/html/index.html
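    For the openssl part described above, a rough sketch of the kind of 'Lock'/'Unlock' wrapper might look like the following. The image path and the aes-256-cbc cipher are just assumptions; openssl prompts for the password itself.

    #!/bin/sh
    # Toggle encryption on a disk image with openssl (sketch only).
    # IMAGE path and cipher choice are assumptions; adjust to taste.
    IMAGE="/Volumes/MyFlash/private.dmg"
    if [ -f "$IMAGE.enc" ]; then
        # "Unlock": decrypt back to a usable image, then drop the encrypted copy
        openssl enc -d -aes-256-cbc -in "$IMAGE.enc" -out "$IMAGE" && rm "$IMAGE.enc"
    else
        # "Lock": encrypt the image, then remove the plain copy
        openssl enc -aes-256-cbc -salt -in "$IMAGE" -out "$IMAGE.enc" && rm "$IMAGE"
    fi

    Wrapping something like that in an Automator "Run Shell Script" action is one way to get the clickable-app behaviour mentioned above.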

  • Power Shell Script for getting the list of members of a particular collection group

    Hi Group
    I am looking for a PowerShell script for the manual process below in SCCM 2012. Please help.
    Obtain the list of users in the "All Users Group1" collection that have been defined as a Primary User of a device, and what that user's short name and device name are.
    Obtain the list of users from Active Directory that have their "Title" attribute equal to "Non-Employee" (samAccountName).
    For each user that is returned from AD, determine if they are assigned as a Primary User of a device and write the device name to a file.
    Continue to append all of the applicable device names to the file.
    End result = a list of all devices whose users have their AD attribute "Title" equal to "Non-Employee".
    thanks
    VAR

    Hi,
    The Cmdlets below should be helpful for you to write the script. You could have a look.
    Get-CMUser
    Get-CMUserDeviceAffinity
    Get-ADUser
    Best Regards,
    Joyce Li

  • ALUI Gateway Not Returning Scripts for Subset of Users

    We have a problem where the ALUI gateway is not returning some .NET scripts for a subset of users. We have the ALUI 6.5 portal and we're using the .NET accelerator 3.1.
    The situation is that this subset of users request one of our portal pages via https, which then reaches through our firewall to our remote server which is running the .NET portlet. The .NET page is served and returned to the users correctly and quickly, but this particular subset of users do not see the result rendered in their browsers for about 3 minutes. A view html source in the browser, as well as tools like Fiddler, show the page is indeed in the browser, but it is stuck trying to request some .NET scripts, and only displays the page when those requests timeout.
    The .NET scripts that are problems are both WebResource.axd and ScriptResource.axd, which in some cases are in our .NET portlets because of the .NET framework itself, but in other cases they are there only because of the ALUI portal itself, when it munges the .NET portlet to handle multiple server forms and validators and such. These .axd scripts are gatewayed so that the client browser requests them through the ALUI gateway, which in turn requests them through our firewall to our remote server -- which always serves these scripts correctly and quickly according to the IIS logs. The problem seems to lie in the ALUI gateway, as it is receiving these scripts correctly and quickly, but it is not returning them to this subset of users. Instead the ALUI gateway seems to be processing for about 3 minutes, and eventually returns an html error page, which of course the client never sees since it is expecting javascript, but we can capture the error page via Fiddler and its just telling us there was a timeout -- the client browser just notes that there is a javascript error.
    The really bizarre part is that this only happens for a subset of users, which amounts to about 20% of our users. There are 2 things that delineate these users that we have found so far. First, these users have email addresses that are 27 - 30 characters long, and the email address is our login id. Note that both shorter and longer email addresses are OK, so there is not some limit to email addresses like this might sound like at first. Secondly, these users have to be in a particular branch of our ldap store, which means they are replicated across to the portal in a particular group. We can move these "bad" users into another branch of our ldap store and once they are replicated to the portal then they work fine, and then if we move them back they return to not working. We cannot find any other difference in our ldap branches or in the corresponding ALUI groups, plus its only the ones in that particular branch with the email lengths in that very specific range.
    The gatewayed requests for these scripts vary by user since the PTARGS in the gatewayed request include the integer userid, but that does not seem to matter because we can have a "good" user successfully request the script with a "bad" user's id, and we can have a "bad" user fail to successfully request the script with a "good" user's id. That seems to point to maybe the authentication cookie being the differentiating factor that determines whether or not a gatewayed request for one of these script files will succeed or fail. So far we have only seen the problem with these particular .net axd scripts, but that may simply be because we don't have many, if any, other scripts or resources that need to be gatewayed since we usually put resources on our imageserver -- these being different because .NET and/or the ALUI portal puts these references in there for us whether we like it or not. Long-term we can re-architect our .NET portlets to not have these axd scripts, although as mentioned earlier, we also see the ALUI portal put these axd scripts in our portlets as part of their munging process -- so that is not in our control completely. We do need to test if this subset of users can successfully request other gatewayed resources or not -- this is actually the first time I thought of that test case, so all I can say right now is it's axd scripts that we know are problems, but it may or may not be a bigger problem.
    One last comment, as we appear to have found a work-around, but it does not make sense at all, and it's not our preferred solution, so we still very much believe there is a problem elsewhere -- most likely in the ALUI gateway, but possibly somehow related to authentication that we do not understand. Our work-around that so far seems to work is to make our remote server be accessed via https instead of http -- which matches the way the client browsers call our portal (https). Again, that at first doesn't make sense, since this is only a problem for a small subset of users -- obviously calling our remote server via http works successfully for all other users, so it's not simply a case that we must match protocols or it won't work. We also use http successfully for our calls to the remote server for portlets that are Java, although it's possible that they don't have any gatewayed resources. But we also would just prefer to not use https for our internal calls in our own network as there is no need for the extra overhead -- and by the way our dev and qa environments do use http even for these .NET portlets and do not have the same problem. What's different in our production environment? The only things that should be different are that we have multiple portal servers and multiple remote servers that are load balanced (not sure that's the right term for how the remote servers are combined) -- and of course we have a firewall between them that does not exist in dev or qa.
    So we would very much appreciate any thoughts on this, whether you've seen something like it before, or just have some additional insight into the gateway and/or authentication process that seems to be the issue.
    Thanks, Paul Wilson

    We've ran into this problem when using the Microsoft ReportViewer control. In our case, we found that the portal gateway malformed the urls containing webresource.axd, so the browser was unable to get the correct address to the files. Note that there are usually multiple links to the axd files, they return different resources depending on the query string they get.
    To solve the problem, we ended up with a bit of a hack solution, but it works well. We extracted the resources we needed from the ReportViewer control's assembly using Reflector, and then published them on the image server. The next piece was to override the Render method of the page that hosted the control. In our custom version of Render, we parsed the html of the page, and replaced the contents of the src= elements with pt:images// links. These processed just fine in the portal's transformer, and our resources started showing up.
    Our Render looks something like the following code sample. The "HACKReportViewerControlPortalImageGatewayFix" class has all of the code to do the parsing. In this case, it is specific to the report viewer, because it has some special considerations for parsing the urls. My bet is that your code will be quite custom as well. Therefore, I've not included this piece of code. The important piece below is the invocation of MyBase.Render, which tells the page to render all of its contents. Once that method is done, all of the HTML for the page is in the writer. The ModifyImageTags method then parses the html, doing the necessary replacements. Finally, the modified html is written to the page's writer, so it can be output following the normal .net processes. Also note that when parsing for urls to replace, don't do all of them, just look for the ones containing axd.
    (VB.NET)
    Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
              Dim fixer As New HACKReportViewerControlPortalImageGatewayFix
              MyBase.Render(fixer.GetWriter)
              writer.Write(fixer.ModifyImageTags())
    End Sub
    This works great for images. However, if you are dealing with javascripts, I'm not sure if this will work for you - as some .NET controls send different scripts depending on the browser. For example, in IE, you get more buttons on the toolbar for the ReportViewer, so you get more javascript too. When using FF, you get less buttons, and less script. We didn't have a problem with the scripts, so we haven't needed to solve this one.
    As for timing, this type of solution doesn't take much to put together. You are really just doing some string parsing and replacements. If you are a regex ninja, it's probably even easier. We had our solution working in a day or two.
    An added benefit to this solution is that you are putting less bytes through the portal's gateway, and sending that traffic to the image server instead.

  • Problem using ant 1.6.2 scripts for weblogic 8.1

    Hi,
    I downloaded Ant 1.6.2 and started writing Ant scripts. These ran fine when I was doing jobs related to WebSphere Application Server, but they fail when I run the same targets in WebLogic.
    I know that this problem is mainly due to the old version of the built-in Ant provided by WebLogic.
    I use the import, input and condition tasks in my code, and whenever I run the Ant scripts in WebLogic it says that these tasks cannot be found.
    I am in great need of help; can anyone please tell me how to make these newer-version Ant scripts work in WebLogic? I have no choice but to use these tasks, as the design requires them.
    I will be greatly indebted to the replier, as I am wasting a lot of time on this.

    Hello,
    I could not find an easy way to get the WLS version of Ant to start using the tasks you mention from Ant 1.6.2.
    I tried defining the tasks using taskdef, and also adding them to the WLS ant.jar and updating the default.properties file, but Ant complained about unexpected elements in the build.xml file where I added the new tags (e.g. import).
    If you're really up against it you can try adding the WLS tasks to Ant 1.6.2; they are listed here:
    http://e-docs.bea.com/wls/docs81/toolstable/ToolsTable.html#1009580
    Although I could not locate exactly which jar they are in. Good luck.
    Cheers
    Hussein Badakhchani
    www.orbism.com

  • Error for linking ecatt Script for exporting parameter

    Hi,
    We have 2 SAPGUI scripts: one is transaction CO01, to create the production order, and the other is CO02, to release the production order. In the first script I get the production order number as parameter 1 in the message field. I need to export this field to the second script for releasing the order.
    I am not able to pick up the order.
    I tried to use this ABAP code:
    ABAP.
    DATA: z_aufnr like CAUFVD-AUFNR.
    get parameter id 'ANR' FIELD Z_AUFNR.
    ENDABAP.
    Still, the export parameter in the log comes up empty. So if I create a third script using ref# it errors out at the start of the second script.
    Your help is greatly appreciated.
    Thanks,
    Gajanan

    Hi,
    you don't have to use ABAP for this.
    In the command editor, choose the first script's recording interface and double-click it. Observe the screen list on the right side, select the last screen, double-click the field and parameterize it, say by giving it the name variable1 in the VALIN column.
    Create a parameter named param1 of export type with variable1 as its default value.
    Now for the second script, create a parameter of import type and set its value to &param1&.
    Also pass this value to the second script the same way as you did for the first script.
    This will work for sure.
    To learn more about how to parameterize, follow these links:
    /people/sapna.modi/blog/2006/04/10/ecatt-scripts-creation-150-tcd-mode-part-ii
    http://www.sapecc.com/tutorials/secat_create.htm
    Please reward points.

  • Hi, is there a TextCleanup script for InDesign CC?

    I am looking for a TextCleanup script for InDesign CC; any help would be greatly appreciated.

    There are tons of scripts to do this.
    For example, Peter Kahrel's "Remove spurious space":
    http://www.kahrel.plus.com/indesign/clean_space.html
    You may also look around on the web.

  • Script for no sound when receiving spam

    Hi there, I'm looking for a script for Apple Mail that shuts off sound when a message that's labeled as spam comes in. I have several e-mail accounts running in Apple Mail. It happens often that while checking for new mail only spam comes in. It would be great if the notification stays silent in that case. Any ideas? TIA.

    Either your ringer is not on, or you have gone into Settings and turned sounds down there. Hope this helps.

  • ECMA script for checking active workflows for an list item

    Hi, I have more than one workflow associated with the list. If any workflow is active for an item, I need to prevent starting another workflow for the same item. I am using the following code to achieve this. Can anyone please provide
    me with the ECMA object model equivalent for achieving the same?
    //Check for any active workflows for the document
    private void CheckForActiveWorkflows()
    {
        // Parameters 'List' and 'ID' will be null for site workflows.
        if (!String.IsNullOrEmpty(Request.Params["List"]) && !String.IsNullOrEmpty(Request.Params["ID"]))
        {
            this.workflowList = this.Web.Lists[new Guid(Request.Params["List"])];
            this.workflowListItem = this.workflowList.GetItemById(Convert.ToInt32(Request.Params["ID"]));
        }
        SPWorkflowManager manager = this.Site.WorkflowManager;
        SPWorkflowCollection workflowCollection = manager.GetItemActiveWorkflows(this.workflowListItem);
        if (workflowCollection.Count > 0)
        {
            SPUtility.TransferToErrorPage("A workflow is already running for the document. Kindly complete it before starting a new workflow");
        }
    }

    Hi,
    According to your post, my understanding is that you want to use ECMA script to check active workflows for a list item.
    You can use the Workflow web service "/_vti_bin/workflow.asmx", and the GetWorkflowDataForItem operation in particular.
    Here is a great blog for you to take a look at:
    http://jamestsai.net/Blog/post/Using-JavaScript-to-check-SharePoint-list-item-workflow-status-via-Web-Service.aspx
    In addition, you can use SPServices. For more information, please refer to:
    http://sharepoint.stackexchange.com/questions/72962/is-there-a-way-to-check-if-a-workflow-is-completed-using-javascript
    Best Regards,
    Linda Li
    TechNet Community Support

  • Cell Counting Script for the Biomedical Sciences

    I want to talk to you about something I envision will be of great use in the biological/medical community. This involves the creation of an Adobe PS script that will count cells manually with the click of a button. First let me tell you a bit about my research. As a medical student, I’m interested in ophthalmology and I hope to make it a career out of fighting preventable forms of blindness like glaucoma, macular degeneration and diabetic retinopathy. Here’s some background of my work in Glaucoma:
    Glaucoma is currently the second most common cause of irreversible blindness in the world. This usually presents as increased Intraocular Pressures (IOP) that damage a certain subpopulation of cells within the retina called Retinal Ganglion Cells (RGC).
    We have mice that work as great models for glaucoma. These mice are imaged before the onset of glaucoma and treated with pharmacological agents throughout their glaucomatous state that we hope will 1.) reduce IOP and 2.) spare the RGCs.
    I’m attaching an example of the images I am taking from mice and also a sketch of what I envision the Cell Counting Interface to look like.  This is something that other researchers would definitely use specially if they are doing multi channel work with fluorescent antibodies.
    Ideas for a cell counting script:
    1.     The script should ask for a file or multiple files to open.
    2.     Once image file is open,  the script will ask to highlight a working area. Multiple areas can be created within one image file but only one area can be active. There should be a button that creates a new area that is easy to expand its width and length and is also easy to rotate and move similar to the Free Transform function on PS CS3.
    3.     Cells will either be Red, Green, Blue or Yellow (overlap between red+green signals). When a specific area is toggled active and the researcher has the Red Channel toggled in preparation to count all the Red cells within the toggled active area, each Left click of the mouse will put a small solid red circle (adjustable to account for larger or smaller cell sizes) over the clicked cell on the image. This cell will be counted under the Red Channel for that particular “Active” area. There will automatically be a “Total Area toggle that, when active, will sum all the areas and will take a tally of the total number of Red, Green, Blue and Yellow cells that have been manually counted. As an example, say the researcher wants to work in an area of the image file whose cells have not yet been counted. The user clicks the button, “Create Area” and an “Area N” line is created below the button. The “Area N” line has a corresponding toggle switch that is clickable in order to activate. Simultaneously to creating the line below the button, the user will hold the Left click on the mouse, drag it to create Area N, and release when the desired size is attained. The user can then rotate and make final adjustments to make sure it is in the desired spot in the image file.
    4.     When the area is active, a thicker bright outline distinguishable from the rest of the areas will be seen. Clicks will only be allowed within an active area. If the “Total Area” toggle is selected, all areas become active, yet the script will be robust enough to keep track of which cells came from what area and add these to the Total Area numbers
    5.     If a Channel is deselected all dots of that color (Red, Green or Blue) will be invisible until the channel is selected again. 
    6.     Two or more scripts can run at the same time.
    7.     A print out will print the raw image plus the image with the dots, the filename, the individual area statistics (cell counts per channel) and the total Area and total cells counted (Total Red, Total Green, Total Blue, Total Yellow)
    8.     The user has to manually interact and click on spots he/she thinks are cells. (Automated programs are not as good at picking out cells from the artifacts left behind.)
    9.     Oh, right-clicking on a dot should remove the dot and subtract it from the cell counts.
    Conversion for Area calculation:
    x pixels = y microns
    Area in microns squared = ((H pixels × y microns) / x pixels) × ((W pixels × y microns) / x pixels)
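    As a worked example (the numbers are only illustrative): if the calibration is 100 pixels = 50 microns, a selected area of 400 × 200 pixels works out to (400 × 50 / 100) × (200 × 50 / 100) = 200 microns × 100 microns = 20,000 square microns.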

    Some of what you want is possible. Take a look at the Count Tool in CS3/4. It may only be available in the Extended versions. That provides a
    reasonable interface for marking items in an image like you are wanting to do with cells. Take a look at that and see what kind of manual process
    you can get worked out.
    However, to get the workflow you want (or anything close to it) would require more than a little code.
