A quick test of a few sorts.

For those interested in pure "tinkering", I've been playing around with sorting algorithms.
Goal: as usual, just to have fun.
In the order they appear in the snippet:
A bubble sort (100% home-made; I'm quite pleased with it)
LabVIEW's Sort 1D Array function
A recursive quicksort (a decent piece of code too)
Test: sorting a 1D array of 10,000 floats.
The <Bubble Sort>, although optimised as far as it will go, is the slowest: 45,600 ms, a snail.
The <recursive Quick Sort> is much faster: 45 ms, roughly 1,000x faster than the bubble sort.
But the prize goes to the LabVIEW sort function: 2 ms. (To be expected: the function is written directly in assembler.)
I wonder which sorting algorithm LabVIEW's Sort 1D Array function uses.
My guess would be a quicksort; if anyone has that information and it isn't too "top secret", I'd be interested.
There you go; as they say, just for the fun.
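LabVIEW diagrams cannot be shown in text, but the two textual algorithms compared above can be sketched in Python (a hedged sketch; this is not the poster's actual code):

```python
import random

def bubble_sort(a):
    """In-place bubble sort with the usual early-exit optimisation:
    stop as soon as a full pass makes no swap."""
    a = list(a)
    n = len(a)
    while n > 1:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1          # the largest element has bubbled to position n-1
        if not swapped:
            break
    return a

def quick_sort(a):
    """Recursive quicksort (not in place, written for clarity)."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

data = [random.random() for _ in range(1000)]
assert bubble_sort(data) == sorted(data)
assert quick_sort(data) == sorted(data)
```

The O(n²) vs O(n log n) gap is exactly why the bubble sort loses by three orders of magnitude on 10,000 elements.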

"These remarks are meant to help you"
I'm well aware of that, and that's exactly how I take them.
One thing about me: please don't bristle at replies of mine like "Of course my benchmark is valid..."
That's just how I am; it goes off like a gunshot, but there's nothing deep behind it.
A single iteration?
No, I ran about ten. The objective wasn't a precision benchmark, just an order of magnitude.
That said (and I may be wrong), I don't think my three chronological sequences (the 3 flat sequences)
running in parallel cause any problem. What matters is that each measurement follows a strictly
sequential order, which is the case. In my opinion, having 3 sequences in parallel barely affects the benchmark result.
With 3,000 of them, yes (limitations due to the OS and the CPU), but not with 3 (on a 4-core machine at 3 GHz).
Once again, this is just my opinion; no less, but no more either.
EDIT
For me, what matters in a benchmark is not the absolute result
(it is very hard to measure anything precisely on a multitasking OS)
but only the comparison.
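The "about ten iterations, order of magnitude only" approach can be sketched as a small timing harness (illustrative Python, not the original LabVIEW flat-sequence bench); taking the median of several runs smooths out OS scheduling noise, and only the ratios between algorithms are meaningful:

```python
import random
import time

def time_sort(sort_fn, data, runs=10):
    """Run sort_fn on a fresh copy of data `runs` times and
    return the median wall-clock time in seconds."""
    timings = []
    for _ in range(runs):
        copy = list(data)               # each run sorts an unsorted copy
        start = time.perf_counter()
        sort_fn(copy)
        timings.append(time.perf_counter() - start)
    timings.sort()
    return timings[len(timings) // 2]   # median, robust to OS hiccups

data = [random.random() for _ in range(10_000)]
median_ms = time_sort(sorted, data) * 1e3
print(f"built-in sort, median of 10 runs: {median_ms:.2f} ms")
```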

Similar Messages

  • Problem downloading the Photoshop Elements 10 trial version on Windows 7

    The download of the trial version of Photoshop Elements 10 on a Windows 7 PC seems to go fine, and yet I can find no trace of it afterwards.

    Thank you for the reply, but I had already made that check before sending my message.
    Date: Mon, 6 Feb 2012 14:39:47 -0700
    Re: Problem downloading the Photoshop Elements 10 trial version on Windows 7
    Reply created by Jeff A Wright in Downloading, Installing, Setting Up
    You should be able to locate the application under Start > All Programs.

  • Problem with a Multiple Numeric Limit Test in TestStand

    Hello everyone,
    I developed a simple LabVIEW program with an array (the array holds 3 numbers: 0, 8 and 24), and I wanted to call that LabVIEW function from TestStand. So I wrote a TestStand sequence and used a "Multiple Numeric Limit Test" step in MainSequence (I used the Multiple Numeric Limit Test because my LabVIEW function returns an array of 3 numbers), and under Variables I declared a container in Locals.
    Problem: I get an error at execution time.
    Attached are a few screenshots of the TestStand sequence, and below is my LabVIEW program.
    I don't know TestStand very well.
    Thanks for the help.
    Attachments:
    Sequence.PNG ‏118 KB
    Sequence2.PNG ‏44 KB
    Sequence1.PNG ‏99 KB

    Hello,
    A priori this is not a TestStand error but a LabVIEW one.
    The ring control is configured with sequential values, which means you are not outputting 0, 8, 24 but 0, 1, 2, so the test in TestStand fails.
    Also, why use a container rather than an array in TestStand?
    Rodéric L
    Certified LabVIEW Architect
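The diagnosis above can be mimicked outside LabVIEW: a ring set to "sequential values" outputs its index (0, 1, 2), not the intended values (0, 8, 24). A minimal, purely illustrative Python sketch of the bug and the fix (the names are hypothetical, not LabVIEW's API):

```python
# Stand-in for the LabVIEW ring control: with "sequential values"
# checked, the ring outputs the *index* of the selected item.
RING_VALUES = [0, 8, 24]   # the values the TestStand limits actually expect

def ring_output(index):
    """What the mis-configured ring returns: the index itself."""
    return index

def fixed_output(index):
    """Map the ring index back to the intended value."""
    return RING_VALUES[index]

assert [ring_output(i) for i in range(3)] == [0, 1, 2]      # fails the limits
assert [fixed_output(i) for i in range(3)] == [0, 8, 24]    # what TestStand expects
```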

  • WRT54G ver 6, MTU test fails when trying to connect to Xbox360 live

    MTU test fails when trying to connect wirelessly to Xbox360 live.  Firmware is current.   I have a WRT54G version #6, Motorola Surfboard SB5101 Cable Modem, running AOL HighSpeed Broadband through Time Warner Cable.  Changing the MTU setting does not work.  I have tried several other suggestions found on these boards but nothing seems to work.  Any help would be greatly appreciated!

    Hi twhale. If you have already tried the settings I posted, you can try one more thing: access the router setup page; under Security, uncheck Block WAN Request and click Save Settings. Under the Applications & Gaming tab, click Port Range Forwarding and forward ports 88 and 3074 to the static IP of the Xbox, then click Save Settings. Run the Xbox Live test again; if it still shows the same error, upgrade/reflash the firmware.

  • Tests with NI Vision (30 seconds to help me)

    Hello everyone,
    For my final-year internship I developed a program with LabVIEW and NI Vision. While writing my thesis, I have a doubt about something I did, but I no longer have NI Vision available to test it.
    Could someone simply run the 3 attached VIs and tell me whether or not they all return the same rendering of the image?
    If anyone has 30 seconds to help me, that would be really kind.
    Thanks in advance.
    Attachments:
    threshold.zip ‏477 KB

    Thanks for your help; you have answered my question!
    That was in fact exactly my question: can you threshold with 0 as the min and -1 as the max, given that the image is I16? Apparently not, according to your answer, which is what I suspected.
    In my 3rd VI, I simply split the range: I threshold from 0 to 32767, then from -32768 to -1. So I do 2 thresholds and add the 2 images to get the right rendering, which works perfectly. In fact, under LabVIEW 8.6 I cannot threshold a U16 image, so I have to convert the image and the threshold values to I16. That is the only solution I found; if you have a simpler one, I'm interested.
    Thanks again for your help.
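The two-pass trick described above can be sketched in plain Python (a hedged sketch: the function names and the list-based "image" are illustrative, not NI Vision's API). Pixels outside the kept range go to 0, and adding the two passes reproduces the full signed 16-bit image:

```python
def threshold(pixels, lo, hi):
    """Keep pixels inside [lo, hi]; everything else becomes 0."""
    return [p if lo <= p <= hi else 0 for p in pixels]

def threshold_i16_full(pixels):
    """Two thresholds over a signed 16-bit image, then combined:
    pass 1 keeps 0..32767, pass 2 keeps -32768..-1. Since every I16
    value falls in exactly one range, summing the two results
    reconstructs the original image."""
    positive = threshold(pixels, 0, 32767)
    negative = threshold(pixels, -32768, -1)
    return [a + b for a, b in zip(positive, negative)]

image = [-32768, -1, 0, 123, 32767]
assert threshold_i16_full(image) == image
```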

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mapping (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that, after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
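The pre-/post-mapping trigger idea above boils down to issuing a pair of ALTER SESSION statements around the mapping. The sketch below is illustrative only (the helper names are hypothetical and no OWB or database API is called); it just builds the standard event-10046 statements such a trigger could execute, which can then be run through any Oracle session:

```python
def trace_on_sql(level=8, identifier=None):
    """Statements a pre-mapping trigger could issue to start an
    event 10046 extended SQL trace. Level 8 adds wait events to
    the trace; level 12 adds waits and bind variables."""
    stmts = []
    if identifier:
        # Tag the trace file name so it is easy to find in user_dump_dest.
        stmts.append(
            f"ALTER SESSION SET tracefile_identifier = '{identifier}'")
    stmts.append(
        "ALTER SESSION SET events "
        f"'10046 trace name context forever, level {level}'")
    return stmts

def trace_off_sql():
    """Statement a post-mapping trigger could issue to stop tracing."""
    return ["ALTER SESSION SET events '10046 trace name context off'"]

for stmt in trace_on_sql(level=12, identifier="owb_map_load_sales"):
    print(stmt)
```

The resulting trace file in user_dump_dest can then be fed to TKPROF (or a profiler such as the ones listed below) and the formatted output loaded into the proposed repository tables.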
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all, just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (Stuff like recovery/restart, late-arriving data, and so on.)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it before.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Trying to understand Root CA and Basic EFS certificates

    We recently migrated our Root CA from Win2k3 to Win2k8R2. The migration seems successful, but we are still testing it out. My colleagues and I have very little knowledge of the Root CA and certificate use on the network in general.
    One of the basic tests we are trying is to make sure the new Root CA is issuing certs properly. As part of this, we examined the "Issued Certificates" folder of the Certificate Authority. Many of the certs listed there are issued to recognizable employees as the "Requester Name", and the certificate template is "Basic EFS (EFS)". We originally assumed this was the certificate issued by our wireless network when connecting. However, our attempts to replicate and generate new certs have not proven that to be true.
    We have seen new certs requested and issued since the migration was completed, all from standard employees and using the "Basic EFS (EFS)" template. However, we cannot determine what service they are using to request and be issued the cert.
    Through research and testing, I have seen that local file encryption would use and request a cert, but that is not a common service in our company and would not account for all of the certificate requests.
    Can anyone point me in the right direction as to what standard services we could be running where almost every employee has been issued a Basic EFS cert? Wireless, VPN? None of our testing, outside of testing local file encryption, has given us any results.

    Hard to say for sure what caused it initially. There are some 3rd-party USB protection software suites that have been known to cause superfluous enrollments. Here is a trick to see what caused the enrollment to occur. Find one of your suspect EFS certs in the CA database and note the request ID.
    After executing this command:
    certutil -view -Restrict RequestId=nnnn > rownnnn.txt
    You can use the following to dump the request:
    certutil rownnnn.txt > rownnnnRequest.txt
    Then look for the following attribute in rownnnnRequest.txt:
    Attribute[1]: 1.3.6.1.4.1.311.21.20 (Client Information)
      Value[1][0], Length = 4d
      Unknown Attribute type
      Client Id: = 5
      ClientIdDefaultRequest -- 5
      User: TESTDOM\User1
      Machine: test2.contoso.com
      Process:
    xxxx.exe
    Mark B. Cooper, President and Founder of PKI Solutions Inc., former Microsoft Senior Engineer and subject matter expert for Microsoft Active Directory Certificate Services (ADCS). Known as “The PKI Guy” at Microsoft for 10 years.

  • How to test the connection between DSD backend and DSD connector

    Dear all,
    I am configuring the MDSD scenario and I want to ensure that my customization is correct.
    How can I test the connection between the DSD Backend and the DSD connector?

    Hi Viren,
    The problem is:
    We have a new BW system. The Basis people asked me to check the connection between this BW system and the R/3. There is only one client for now (100). I logged into 100 and tried executing RSA1 to check the connection, but it gave a message box saying "You can only work in Client 000". Then I tried logging into 000 (just to make sure) and executing RSA1. Even here I got a message saying "The SAP BW system must not be operated in client 000". Now, I am not sure what is wrong in client 100, or whether I have to change some settings before I can access RSA1 or any other BW transaction. Could you please help me with this?
    Also, I just checked the RFC destinations in BW. There is a destination created for our R/3, and I noticed that it is given a remote UserID and password for remote login. I tried to check the remote UserID in SU01, but there is no userID with that name. Could this be a problem?
    Thanks,
    RPK.

  • Errors in browser when trying to show  data table information. Permissions?

    Hello,
    I have problems getting a database to show in my browser. I am using CS5.5, IIS, and ASP. I have installed the driver as per the directions for my Windows 64-bit operating system. I am using the 32-bit ODBC administrator to set up my DSN (Microsoft Access 2010).
    ASP is running, because a simple time script works and shows the time. In my Dreamweaver panels I see my database. I see the fields in the database when I open the dropdown menu. I run the test when connecting to the database and it shows the information in the data fields. I see the recordset and it is working as it should.
    In my IIS folder for the testing server I have read and write permissions for 4 accounts: me the administrator, the Users account, IIS_IUSRS, and me the user. I have done this simply to make sure all is accounted for.
    Should I change the permissions on wwwroot for these things, or should I just do it for the one particular testing website I am trying to get to work? I only have these permissions on this folder and not on the wwwroot security properties.
    Everything works except when I open the page in a browser. That is where I get a link titled "administrator". There is no information on the page beyond that.
    Any help would be appreciated. I would abandon IIS and install WAMP, but I figure I would get the same problems. Actually, I think I may have tried that in the not too distant past.
    Does anyone know of a good YouTube video that shows how to install WAMP properly for Dreamweaver?
    Joe

    Ok - so it turns out that the transports from the Integration Kit weren't installed on the BO Enterprise server. Once that was done I could connect 100%. The only problem is that I still can't expand the node for the database fields in Crystal Reports. I am at least able to create a new report using the MDX driver, and then the node drop-down works.
    Blessings

  • Problem in testing UI add-on after login method changes

    Dear All Experts/Gurus,
    I have changed the login method of a UI add-on to a new one (because I only have an SDK Implementation version license). Using the add-on identifier generator, I generated the add-on identifier string and put it in the source code:
    Public Sub GetApplication()
        Set oSboGuiApi = New SAPbouiCOM.SboGuiApi
        oSboGuiApi.Connect Command
        oSboGuiApi.AddonIdentifier = "4CC8B8ACE0273A61489738C94047855DE8768CDD37F64D4F11E82759A542BD545D5A6E4D50A39B9E9FB20FA944FF35C5B60FE779"
        Set SboApplication = oSboGuiApi.GetApplication
    End Sub
    After recompiling it, I wanted to verify the change by sending it to a workmate's PC/notebook with a different license and hardware key. The error message when installing/starting the add-on is:
    Runtime error '-7200 (ffffe3e0)':
    Connection - Connection string doesn't match UI development work mode.
    Given that error, I guessed that the login method change succeeded. I also read that SAP's new license mechanism states that if the add-on is developed using the SDK Implementation version, it will only be able to be installed/used on PCs in the customer network, and so on.
    The problem is that I am not fully sure my guess is correct, so I want to conduct some testing. I tried to request a new temporary license with a compatibility license that expires within 2 days, but was unsuccessful. Could you please tell me if there is another way to prove/test that the add-on will still be able to start after the compatibility license for the add-on expires?
    I appreciate your answer so much. TIA
    Rgds,

    Hi Steve,
    The AddonIdentifier you obtain by selecting the Development or Implementation option while creating it depends on the license manager.
    If you move the license manager to another machine, or someone connected to a different license manager tries to run your code, it will not work.
    It is quite clear in the <a href="http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/13d67f3c-0e01-0010-eba2-cab13a5979c2">License Guide</a>:
    <b>Development License</b>
    This license is required for partners developing add-ons and the only license allowing distribution. <i><u>In Development license, the license key is shared by all developers connected to the same License Service</u></i>. The License Service allocates licenses on the basis of first-come, first-served.
    <b>Implementation License</b>
    This license is required for customers intending to develop small modifications or add-ons for their own use using only the UI API.
    <i><u>Add-ons developed under the Implementation license cannot run with a different License Service other than the one that was used for their development</u></i>. The Implementation license does not allow
    distribution to third-parties.
    Regards
    Trinidad

  • Request for TDMS files (testing Matlab importing code)

    Dear all,
    I'm currently working on modifying the example 'ReadFile.m' that comes with the .dll from NI. I have got to a stage that I'm reasonably happy with: it imports multiple groups with multiple channels of different sizes, and can import timestamp data and extra file properties.
    I will post the completed code eventually, but I wanted your help: please send me TDMS files in assorted formats. I have tested a few different formats, but mostly my own concoctions, so they may not trigger the errors I need to see to make the code more robust.
    Any help sending TDMS files is appreciated. Obviously don't send any data that you don't want the world to see.
    Many thanks in advance
    Pete
    Systems engineer (CLAD LV2013)

    Hi Pete,
    Not sure if there's anything in particular which you were looking for, but please see attached for a quick TDMS file I put together. It's a pretty basic file with random numbers, but hope it helps.
    Attachments:
    TDMS File.zip ‏442 KB

  • Using ATGDust to test Formhandlers in MultiSite context

    Is it possible to use ATGDust to test multi-site functionalities of ATG?
    I have overridden ForgotPasswordHandler to set the message subject based on the current site name, which I get using SiteContextManager.getCurrentSiteContext().getCurrentSite().getItemDisplayName().
    From ATG Dust I can create test request and response objects and directly call FormHandler.forgotPassword(request, response), but when I do this, SiteContextManager is not initialized by ATG and it throws a NullPointerException. During normal operation this is presumably handled by one of the pipelines.
    From an ATG Dust test case, how can we make sure the current site context is available inside the Nucleus?

    First, there's a slightly easier way to get at the site name: SiteContextManager.getCurrentSite().getItemDisplayName()
    Second, yes, you can set the site context directly from your unit test code.  You'd call SiteContextManager's pushSiteContext method.  It's not a static method, so you need a reference to the SiteContextManager component.  The method takes a SiteContext as input and returns true if the push was successful.  The way you get your hands on a SiteContext object depends on what your test code is trying to do and how much of the normal ATG runtime environment is present.
    If your test class contains a number of different cases, you might want to be a good citizen and match every pushSiteContext with a call to popSiteContext.  The input to popSiteContext is the SiteContext object you pushed earlier.  There's a sanity check to see if the context at the top of the stack is the one the caller passed in.
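    The push/pop discipline described above can be sketched as follows. Note that SiteContextManager and SiteContext here are deliberately simplified stand-ins, not the real atg.multisite classes (whose constructors and Nucleus wiring differ); in a real ATGDust test you would resolve the actual /atg/multisite/SiteContextManager component instead. Only the try/finally push/pop pattern is the point:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SiteContextPushPopSketch {
    // Stand-in for atg.multisite.SiteContext (hypothetical, for illustration only)
    static class SiteContext {
        final String siteId;
        SiteContext(String siteId) { this.siteId = siteId; }
    }

    // Stand-in for atg.multisite.SiteContextManager (hypothetical, not the real API)
    static class SiteContextManager {
        private final Deque<SiteContext> stack = new ArrayDeque<>();

        boolean pushSiteContext(SiteContext ctx) {
            stack.push(ctx);
            return true;
        }

        SiteContext popSiteContext(SiteContext expected) {
            // Mirrors the sanity check described above: the context being
            // popped should be the one currently at the top of the stack.
            if (stack.peek() != expected) {
                throw new IllegalStateException("unbalanced popSiteContext");
            }
            return stack.pop();
        }

        SiteContext getCurrentSiteContext() { return stack.peek(); }
    }

    public static void main(String[] args) {
        SiteContextManager mgr = new SiteContextManager();
        SiteContext ctx = new SiteContext("storeSiteUS"); // hypothetical site id
        boolean pushed = mgr.pushSiteContext(ctx);
        try {
            // ... here you would call formHandler.forgotPassword(request, response) ...
            System.out.println("current=" + mgr.getCurrentSiteContext().siteId
                    + " pushed=" + pushed);
        } finally {
            // Be a good citizen: match every push with a pop.
            mgr.popSiteContext(ctx);
        }
    }
}
```

    The try/finally ensures every successful push is matched by a pop even if the form handler under test throws, which keeps the site-context stack balanced across test cases.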

  • MAC | stroke style stipple crashes on test movie/ publish in cs3 cs4

    The Stippled stroke style crashes Flash when testing a movie or trying to publish.
    I have done some tests: I drew a random animation of just strokes, say 10 frames, with a solid style. Then I used the same animation but changed the style to Stippled (also with edits to that stroke style, i.e. very dense), and it CRASHES when I try to play the movie.
    This is VERY frustrating for such a seemingly simple change on such a basic part of Flash!
    This problem is on CS4, and I have also checked Flash CS3 at my uni; it does exactly the same when I tried the test again (I made a new, clean animation just to make sure it wasn't the file's fault), so it's not a problem with my computer but with the software!
    I found this search result describing the same problem, http://www.actionscript.org/forums/archive/index.php3/t-41241.html, so I'm not the only one...
    Is it just on the Mac versions?
    This is very annoying, as I have a project due on Friday and thought this style went with what I was working on.

    Like I said, it is hard to be certain.
    As far as I am aware, issues with Snow Leopard are specific to font anomalies in 10.6.7, and the fact that you cannot use the OS's printing system to produce PDFs via File > Print from Adobe applications. Do you have a link or citation for the issues you read online?
    So, reasons to suspect Adobe: running the Adobe installer triggered the problem; Adobe programs crash and they crash in Adobe code;
    Reasons to suspect Apple: it's not a single program crashing; the crashes don't seem to make sense; crashes happen for both CS4 and CS5; crashes affected non-Adobe programs (that is what you meant by "all my programs," right?).
    So, my intuition is you should try an OS reinstall.
    Though even if the root problem is the OS, Adobe probably has some blame here -- Adobe's programs should not crash. So you can certainly take it up with Adobe and hold firm to the line "your program should not crash; why is it crashing" and ultimately they should be able to tell you why (based on the crash report, and perhaps other things). But the answer may turn out to be, "because this Apple subsystem is broken. Fix it and our programs will stop crashing."
    Have you looked at the logs in Console.app? They might shed some additional light.
    Also, since you have only recently installed the OS, you are probably in a good position to reinstall it. Most people are reluctant to do so because they have a lot of data and applications, and it's a huge pain to preserve them. It sounds like you're not in that position, so doing that reinstall is relatively easy. I would therefore do it just for the peace of mind.
    You can contact Adobe Support at http://www.adobe.com/go/supportportal.

  • ClassNotFound Message when trying to deserialize data from an applet

    Hi,
    I have an applet that reads data from a web-server. The data was created by serializing a custom class (Test.java). The serialization format is not the standard one, but XML. This is reached by using the JSX-library (http://www.jsx2.com).
    My HTML-file (Test.html) looks like that:
    <html>
    <applet width="400"
    height="200"
         code="Test.class"
         codebase="./"
         archive="JSX2.0.9.4.jar">
    </applet>
    </html>
    Both Test.class and JSX2.0.9.4.jar are stored in the same directory. The applet reads the serialized data of "Test" instances and tries to deserialize it. But when started, the following error message appears:
    Java VM version: 1.4.1_02
    Java VM vendor: Sun Microsystems Inc.
    java.lang.ClassNotFoundException: Test
    at java.net.URLClassLoader$1.run(URLClassLoader.java:198)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:186)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:299)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:265)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:255)
    at JSX.ObjectReader$Cache.getClass(ObjectReader.java:1503)
    at JSX.ObjectReader$Cache.getClassCache(ObjectReader.java:1511)
    at JSX.ObjectReader.object(ObjectReader.java:796)
    at JSX.ObjectReader.readObject(ObjectReader.java:381)
    at JSX.ObjectReader.readObjectOverride(ObjectReader.java:342)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:318)
    at Test.read(Test.java:49)
    at Test.start(Test.java:72)
    at org.kde.kjas.server.KJASAppletStub$1.run(KJASAppletStub.java:102)
    at java.lang.Thread.run(Thread.java:536)
    When reading serialized data of classes included in the JRE, e.g. an ArrayList, everything works fine.
    I think it is a kind of CLASSPATH problem, because "Test.class" cannot be found although it is in the same directory as the Test.html file. How can I fix this problem?
    Thanks in advance
    Markus

    Make sure your 'CLASSPATH' contains '.'. However, it should, because you are compiling your applets fine.
    Also try specifying a path to your class when loading. If it's in a package, you have to specify that, or include the class directory in your CLASSPATH.

    Hey Devyn, it can't be a classpath problem. It's an applet; if run in the browser, there is no way to specify a classpath short of adding a jar to the archive parameter.
    How are you deserializing the applet?
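    As a point of comparison with standard Java serialization (not JSX, whose ObjectReader is a different API and may not expose the same hook): the usual cure for a ClassNotFoundException during deserialization is to override ObjectInputStream.resolveClass so class lookup goes through an explicit ClassLoader, e.g. the applet's own loader, which can see classes served from the codebase/archive. A minimal, self-contained sketch:

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class LoaderAwareDeserialization {
    // An ObjectInputStream that resolves classes through a supplied ClassLoader
    // instead of the default one chosen by the serialization machinery.
    static class LoaderAwareObjectInputStream extends ObjectInputStream {
        private final ClassLoader loader;

        LoaderAwareObjectInputStream(InputStream in, ClassLoader loader) throws IOException {
            super(in);
            this.loader = loader;
        }

        @Override
        protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
            // Delegate the lookup to the supplied loader.
            return Class.forName(desc.getName(), false, loader);
        }
    }

    public static void main(String[] args) throws Exception {
        // Serialize a list to a byte array.
        List<String> data = new ArrayList<>();
        data.add("hello");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(data);
        }

        // In an applet, getClass().getClassLoader() is the applet's loader,
        // which can load classes from the codebase/archive; here we use this
        // class's own loader as a stand-in.
        ClassLoader loader = LoaderAwareDeserialization.class.getClassLoader();
        try (LoaderAwareObjectInputStream ois = new LoaderAwareObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()), loader)) {
            @SuppressWarnings("unchecked")
            List<String> copy = (List<String>) ois.readObject();
            System.out.println(copy.get(0)); // prints "hello"
        }
    }
}
```

    If JSX's ObjectReader offers no equivalent hook, an alternative worth checking is whether the Test class really is reachable from the applet's loader (same codebase, or packed into the archive jar), since that loader, not the CLASSPATH, governs what an applet can resolve.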

  • CompactDAQ: acquisition across multiple modules

    Hello,
    We need to build a small data logger based on CompactDAQ hardware.
    The logger must be able to record channels from several modules
    (temperature (USB-9213), voltage (USB-9215))...
    In the small program attached, I suspect I am not using the best method to acquire the channels and measurements.
    Three problems were encountered:
    1- A single task cannot contain the acquisition of channels from separate modules, which forces me to create two acquisition tasks: one for the "voltage" channels, another for the "temperature" channels.
    This remains doable, but it complicates the diagram and makes it harder to add or remove modules and acquired channels (the diagram is impacted).
    2- In the program as written, the acquisition rate must be the same for all modules. Just set rate one to 1 Hz and rate two to 10 Hz to see that the rate for both tasks is forced to 1 Hz: the achievable rate is limited by the slowest module (thermocouple). Fast acquisition of the "voltage" channels is then not possible.
    3- In the worst case, acquiring all the thermocouple channels of the USB-9213 module, the maximum rate is 75 Hz. Using 50 Hz for rate one and 1 Hz for rate two to test the robustness of the program, we quickly get error -200279: Attempted to read samples that are no longer available. The requested sample was previously available, but has since been overwritten.
    I think the overall technique used to acquire the module channels is simply not suitable. Any helpful advice on this problem would be most welcome...
    Thanks in advance
    CS
    Solved!
    Go to Solution.
    Attachments:
    ENREGISTREUR_USB_DAQ.zip ‏68 KB

    I have a cDAQ-9178 chassis, a CompactDAQ chassis (8-slot, USB).
    The hardware configuration can be retrieved at the following address:
    http://ohm.ni.com/advisors/compactdaq?configid=CD1​720585&elq=c067d1165fac4266bfced875dfa57429
    One part of the answer for achieving multi-rate acquisition is to change the way the DAQmx channels are read back. The solution works, but the method is not very flexible: if channels or modules are added, the diagram must be modified.
    CS
    Attachments:
    ENREGISTREUR_USB_DAQ.zip ‏69 KB
