Is this practical?

We want to create a Java application that will generate digital certificates for our clients. The certificates will be signed by us, so that we are trusting ourselves rather than a third-party CA. The idea to implement this came when we realized that we needed a way to verify the identity of our clients without asking them to purchase, or purchasing for them, digital certificates (we are a non-profit organization, as are our clients).
I figured this wouldn't be an overly difficult task, but now I'm rethinking that assumption. I've had a look at the keytool GUI program (found here: http://homepage.ntlworld.com/wayne_grant/keytool.html ), including the source code. This now seems like a rather daunting task to me.
I just want to hear other people's opinions on the situation. Is it practical to build a certificate generator from scratch? Are there any alternatives that fit our needs? I was testing the keytool GUI and it has elements of what we need, but there's no way for us to sign the certificates, nor could I find any means of creating certificates (or if they were created, they were not saved anywhere practical).
Thank you for your time, all comments appreciated.
RS

Here's my favorite link for "How to be your own Certificate Authority". Note that it depends on a mix of Java and OpenSSL - but it explains the process pretty well:
http://www.devx.com/Java/Article/10185/0/page/1
Not quite what you're asking for - but it may do the job for you.
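If you do end up doing the signing step yourselves in Java, one common route is the Bouncy Castle library rather than the bare JDK. Purely as a rough sketch (the CA name, validity period and signature algorithm below are placeholders, and in practice you'd accept a CSR from the client rather than a ready-made public key):

import java.math.BigInteger;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.cert.X509Certificate;
import java.util.Date;
import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.cert.X509v3CertificateBuilder;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

public class ClientCertSigner {
    // Signs a client's public key with our CA key (caKey would come from our own keystore).
    public static X509Certificate sign(PrivateKey caKey, PublicKey clientKey, String clientName)
            throws Exception {
        X500Name issuer = new X500Name("CN=Our Non-Profit CA");   // placeholder DN
        X500Name subject = new X500Name("CN=" + clientName);
        BigInteger serial = BigInteger.valueOf(System.currentTimeMillis());
        Date notBefore = new Date();
        Date notAfter = new Date(notBefore.getTime() + 365L * 24 * 60 * 60 * 1000); // ~1 year
        X509v3CertificateBuilder builder = new JcaX509v3CertificateBuilder(
                issuer, serial, notBefore, notAfter, subject, clientKey);
        ContentSigner signer = new JcaContentSignerBuilder("SHA256withRSA").build(caKey);
        return new JcaX509CertificateConverter().getCertificate(builder.build(signer));
    }
}

The certificate-building part really is that small; the effort goes into the surrounding PKI housekeeping - protecting the CA key, getting the CA certificate into the clients' trust stores, and handling renewal and revocation - which is why the article above still leans on OpenSSL for part of the job.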
Grant

Similar Messages

  • How poor is this practice?

    We have applications at work that spend most of the day rolling back insert statements because of attempts to insert dupe data into a unique indexed table.
    How poor is this practice to eliminate duplicates?
    It seems to be a common practice by those used to working with SQL Server.
    Why is it any different in SQL Server than Oracle?

    > It is evident the developer has chosen the lazy way to deal with the dupes by introducing a unique index that uses a natural key, but for whatever reason he sees this natural key repeat over and over throughout the day.
    > His code uses INSERT ALL INTO so that he can insert in batches and probably doesn't even realize he loses his entire batch whenever there is a dupe on insert.
    Well you need to modify the process as suggested by me and others.
    IMHO the simplest way is to use the MERGE statement and the NOT MATCHED condition. That allows you to insert the new rows while ignoring the matched rows.
    See MERGE in the SQL Language doc
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm
    > MERGE Purpose
    > Use the MERGE statement to select rows from one or more sources for update
    Here is an Oracle-Base example:
    http://www.oracle-base.com/articles/10g/merge-enhancements-10g.php
    Optional Clauses
    The MATCHED and NOT MATCHED clauses are now optional, making all of the following examples valid.
    -- Both clauses present.
    MERGE INTO test1 a
      USING all_objects b
        ON (a.object_id = b.object_id)
      WHEN MATCHED THEN
        UPDATE SET a.status = b.status
      WHEN NOT MATCHED THEN
        INSERT (object_id, status)
        VALUES (b.object_id, b.status);
    -- No matched clause, insert only.
    MERGE INTO test1 a
      USING all_objects b
        ON (a.object_id = b.object_id)
      WHEN NOT MATCHED THEN
        INSERT (object_id, status)
        VALUES (b.object_id, b.status);
    -- No not-matched clause, update only.
    MERGE INTO test1 a
      USING all_objects b
        ON (a.object_id = b.object_id)
      WHEN MATCHED THEN
        UPDATE SET a.status = b.status;
    The second example above will 'ignore' the duplicates.
    My second choice would be to use DML error logging.

  • I'm trying to draw a class schedule using JTable, is this practical?

    I'm developing a system that has to draw a schedule on screen. The very first question I ask myself is whether this is practical or not, but I can't think of any other component to use.
    The things I don't know are:
    1. If I use the header to indicate the time (e.g. 9:00-10:30; 10:30-12:00; 12:00-13:30; 15:00-16:30), can I have another vertical "header" to indicate the day of the week?
    2. Many times an event in the schedule finishes before the time slot does, e.g. a Java Programming class could be from 9:00 - 11:30 on Monday, which means that if I just add it to the first two time slots on Monday, it will look as though it runs from 9:00 - 12:00.
    3. If I make the time slots shorter, e.g. 9:00-9:30; 9:30-10:00; 10:00-10:30; 10:30-11:00 ... 16:00-16:30, the table will look too odd to put into real use.
    4. If this were in Microsoft Excel, I would use three cells, each representing a period of 30 minutes, under a merged column header - is it possible to do the same thing with Java?
    Thank you very much in advance for any replies!

    In the future, Swing related questions should be posted in the Swing forum.
    Maybe one of the example here will give you some ideas:
    http://www.crionics.com/products/opensource/faq/swing_ex/JTableExamples1.html
    > can I have another vertical "header"
    It's called a row header, and you add a component to the scroll pane to represent your days of the week. This is done by using the setRowHeaderView(...) method. Search the Swing forum for examples using this method (I've posted several).
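    Here's a rough, self-contained sketch of that idea (the class name, data and sizes are my own, not from the thread): a JTable holds the grid, the column header holds the time slots, and a JList added via setRowHeaderView(...) acts as the vertical "header" for the days.

    import javax.swing.*;

    public class ScheduleTableDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    String[] slots = {"9:00-10:30", "10:30-12:00", "12:00-13:30", "15:00-16:30"};
                    String[] days = {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday"};
                    Object[][] data = new Object[days.length][slots.length];
                    data[0][0] = "Java Programming";                  // Monday, first slot

                    JTable table = new JTable(data, slots);
                    JList rowHeader = new JList(days);                // the vertical "header"
                    rowHeader.setFixedCellHeight(table.getRowHeight());
                    rowHeader.setFixedCellWidth(100);

                    JScrollPane scrollPane = new JScrollPane(table);
                    scrollPane.setRowHeaderView(rowHeader);           // days appear to the left of the grid

                    JFrame frame = new JFrame("Schedule");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.add(scrollPane);
                    frame.pack();
                    frame.setVisible(true);
                }
            });
        }
    }

    Note the row header only solves the vertical labels; spanning a class across part of a slot (your questions 2 and 4) would still need a custom renderer or one of the cell-spanning table examples, so the Excel-style 30-minute-cell approach is doable but takes more work in Swing.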

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    > Please comment on your view of this practice. Thanks!
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    > I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    > Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    The process you describe is what I would expect, and require, in any well-run environment.
    > I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    > They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    > I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed there is a documented procedure for restoring the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.

  • Changing Plan Mid-month "pro-rated" unclear and deceptive practice. C.S. offers no help

    I recently found myself having to increase my plan minutes for a family emergency. I went online and checked my usage: I was at 394 minutes out of a 450 minute plan, so I elected to switch to the 900 minute plan for the month, anticipating I would go over in the next week. I was given options to backdate the change or start it that day; as I had clearly not gone over my allotted minutes, I chose to start it as of that day.
    My bill came and it included 69 dollars in minute overages. How was this explained? By prorating the bill, they divide the cycle in half, not only by "dollars" (which seemed to be the explanation I received online) but by minutes. So my minute allotment of 450 was reduced to 240 for the two weeks in which I originally had 450 minutes. They billed as if that were my allowance, therefore saying I went over my plan by 154 minutes, which I was billed for to the tune of 69.00 dollars, plus the monthly increase in plan.
    My question to any logical, rational, intelligent person would be: why would someone choose to up their minute plan before they went over, just to then be charged for minutes they would not have been charged for had they not upped their plan to the next level? This defies common sense, and any explanation would as well.
    Customer service sadly would not even hear or understand this, only saying this is a valid "cost" and I should have read the "disclosure" statement or called if I had questions rather than use online services. I asked both the operator and a supervisor for the terms of the disclosure agreement and neither could tell me what it said. However, in reading billing question answers they respond, "For example, if you changed rate plans during the middle of your billing cycle, your statement should contain a charge for the old rate plan (according to the number of days in your cycle that you were on the old rate plan) and the new rate plan." Yes, that makes sense, but nowhere does it say your minutes may be reduced up to that point and you may incur additional charges for minutes already used, by, ironically, "increasing your minute plan".
    I am waiting for a manager to return my call. As a long-time customer of Verizon who has promoted them and the new iPhone all over Facebook and such to my friends, I am very discouraged by this practice, and by the lack of resolution.

    Mike, I appreciate that at least you are familiar with what the disclosure statement says, despite other reps not being able to recite it. However, the term "may" in itself is misleading. If airtime is prorated, the statement should read "YOUR airtime MINUTES will be prorated and reduced to this point, possibly resulting in overages and additional charges." The wording is intentionally misleading. If it weren't, Mike, why would anyone KNOWINGLY select an option that would then cause them to owe an additional 69.00 dollars they didn't owe at the time, when UPPING their PLAN?
    The fact that customer service can't see that as a very real possibility, and is just dismissive, saying "you should have read the disclosure statement" and "it is a valid charge", tells me they run into this frequently and have a script to read from to cover themselves, leaving the human element out of the equation. It is sad enough that I had to up my plan only temporarily while dealing with a tragic family situation; this adds insult to injury. To me, the terms "may" be prorated and "may" result in additional overages meant that if I had gone over my minutes prior to the change I would be charged, because, Mike, that is COMMON SENSE, and a REASONABLE person may think that. Customer service reps could at least acknowledge that as a very real understanding of how that line could be interpreted, and that a reasonable person would not otherwise increase their plan only to be charged for something they would not have otherwise been charged.
    I am almost positive that in the future this wording will be changed after the continued myriad of complaints. From what I already understand, the new "changing minutes and not plan" option has already changed it. So why not make things right for long-time loyal customers now? I actually want to reduce my plan back down now, as it was a temporary emergency, but I DON'T dare do it mid-month again, so I will wait another 2 weeks and pay even more.
    This is NOT me and my misunderstanding; just googling the topic comes up with tons and tons of complaints, articles written on the topic and so forth. I have been waiting days for someone to call me back as promised, and that hasn't happened. I have talked to so many friends about this, many of them Verizon customers, and all say this can't be right, because it DOESN'T MAKE SENSE, despite it being "valid" as you say.

  • Any ideas on how to do a local mirror for this situation?

    I'm starting a project to allow ArchLinux to be used on a Cluster environment (autoinstallation of nodes and such). I'm going to implement this where I'm working right now (~25 node cluster). Currently they're using RocksClusters.
    The problem is that the connection to the internet from work is generally really bad during the day. There's an HTTP proxy in the middle. The other day I tried installing Arch Linux using the FTP image and it took more than 5 hours just to do an upgrade plus installing subversion and other packages, right after an FTP installation (which wasn't fast either).
    The idea is that the frontend (the main node of the cluster) would hold a local mirror of packages so that the nodes use that mirror when they install (the frontend would use it too, because of the bad speed).
    As I think it's better to update the mirror and perform an upgrade only occasionally (if something breaks I would leave users stranded until I fix it), I thought I should download a snapshot of extra/ and current/ only once. But the best speed I get from rsync (even at night, when an HTTP transfer from kernel.org goes at 200KB/s) is ~13KB/s; this would take days (and when it's done I would have to resync because of any newer packages released in the meantime).
    I could download extra/ and current/ at home (I have 250KB/s downstream but I get ~100KB/s from rsync) and record several CDs (6!... ~(3GB + 700MB)/700MB), but that's not very nice. I think maybe that would only be needed the first time; afterwards an rsync would take a lot less, but I don't know how much less.
    Obviously I could speed things up a little if I downloaded the full ISO and rsynced current/ using it as a base. But for extra/ I don't have ISOs.
    I think it's a little impractical to download everything, as I wouldn't need the whole of extra/ anyway. But it's hard to know all the packages needed and their dependencies so as to download only those.
    So... I would like to know if anyone has ideas on how to make this practical. I wouldn't want my whole project to crumble because of this detail.
    It's annoying because using pacman at home always works at max speed.
    BTW, I've read the HOWTO that explains how to mount pacman's cache on the nodes to have a shared cache, but I'm not very sure that's a good option. Anyway, it would imply downloading everything at work, which would take years.

    V01D wrote:
    > After installation the packages that are in the cache are the ones from current. All the stuff from extra/ won't be there until I install something from there.
    > Anyway, if I install from a full CD I get old packages which I have to pacman -Syu after installation (and that takes a long time).
    Oh, so that's how it is.
    V01D wrote:
    > I think I'm going to try this out:
    > * rsync at home (already got current last night)
    > * burn a DVD
    > * go to work and then update the packages on the DVD using rsync again (this should be fast, if I don't wait too long after recording it)
    > And to optimize further rsyncs:
    > * Do a first install on all nodes and try it out for a few days (so I install all the packages needed)
    > * Construct a list of packages used by all nodes and the frontend
    > * Remove them from my mirror
    > * Do further rsync updates only updating the files I already have
    > This would be the manual approach to the shared cache idea, I think.
    Hmm... but why do you want to use rsync? You'll need to download the whole repo, which is quite large (current + extra + testing + community > 5.1GB; extra is the largest). I suggest you download only those packages, and their dependencies, that you actually use.
    I have a similar situation. At work I have unlimited traffic (48kbps during the day and 128kbps at night); at home, a fast connection (up to 256kbps) but I pay for every megabyte (a little, but after 100-500 megabytes it becomes very noticeable). So I do
    yes | pacman -Syuw
    or
    yes | pacman -Syw pkg1 pkg2 ... pkgN
    at work (especially when packages are big), then put the newly downloaded files on my flash drive, then put them into /var/cache/pacman/pkg/ at home, and then I only need to do pacman -Sy before installing, which takes less than a minute.
    I have a 1GB flash drive so I can always keep the whole cache on it. Synchronizing work cache <-> flash drive <-> home cache is very easy.
    P.S.: Recently I decided to make a complete mirror of all i686 packages from archlinux.org with rsync - not for myself but for friends who wanted to install Linux. Anyway, I don't pay for every megabyte at my work. However, it took almost a week to download 5.1 GB of packages.
    IMHO, for most local mirror setups rsync is overkill. How many users are there that use more than 30% of the packages from the repos? So why make a full mirror with rsync when you can cache only the installed packages?

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best practice document for how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, limiting which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    EVEN if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs - unless you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This causes downtime for the detached databases, so there are better methods of distribution available.
    And then there is the previously mentioned method of generating scripts for the schema, and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine, and while not as practical as the other options, it is the best way of distributing databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering its own advantages and drawbacks, which allows them to align with different business requirements.

  • Best practice for storing price of an item in database ?

    In the UK we call sales tax, VAT, which is currently 17.5%
    I store the ex-VAT price in the database
    I store the current VAT rate for the UK as an application variable (VAT rate is set to change here in the UK in January)
    Whenever the website displays the price of an item (which includes VAT), it takes the ex-VAT price and adds the VAT dynamically.
    I have a section on the website called 'Personal Shopper' which will happily search for goods in a fixed price range, e.g. one link is under £20, another is £20-£50.
    This means my search query has to perform the VAT calculation for each item. Is this practice normal, or is it better to have a database column that stores the price including VAT?

    I'm also based in the UK, and this is what we do:
    In our Products table, we store the product price excluding VAT and a VAT rate ID, which is joined off to a VAT Rates table. So to calculate the selling price - yes, this is done at the SQL level when querying back the data. To store the net, VAT and gross amounts would be to effectively duplicate data, hence is evil. It also means that come January we only have to update that one row in one table, and the whole site is fixed.
    However.
    When someone places an order, we store the product ID, net amount, VAT code ID, VAT amount and VAT percentage. That way there's never any issue with changing VAT codes in your VAT Codes table, as that will only affect live prices being shown on your website. Forever more, whenever pulling back old order data, you have the net amount, VAT amount and VAT percentage all hard-coded in your order line, which avoids any confusion.
    I've even seen TAS Books get confused after a VAT change where in some places on an order it recalculates from live data and in others displays stored data, and there have been discrepancies.
    I've seen many people have issues with tax changes before, and as database space is so cheap I'd always just store it against an order as a point-in-time snapshot.
    O.

  • View/Controller best practices

    Hello,
    A coworker and I are in charge of creating the standards for Web Dynpro development at our company.  We've been able to agree on most topics, but we're stuck on one issue.  Should ALL logic be in the controller or is it ok to perform some logic in the views?
    My coworker makes some valid points - we work in a team environment and you can't have two people working on the controller at the same time; if all your logic is in there, it's practically impossible for a group of developers to work on the same application. In addition, there's often logic that's only applicable to a certain view. He doesn't like the idea of a controller cluttered with logic from all the views, and doesn't see why we need to add an extra layer to execute something. For instance, if a model is needed only for one view (say, to look up fields for a dropdown that only exists on that view), why have yet another executeSomeRFC method in the controller when we can do it in the view?
    My opinion is that Web Dynpro follows the MVC paradigm, and therefore all logic should be in the controller. While it's true that right now a certain model or a certain piece of logic might only be needed for the one view, you never know if it will be needed somewhere else later. In addition, the statement that "logic that's only needed for one view can be done in that view" leaves a lot open to interpretation, and I think developers could start sneaking more and more code into the view because they think it's easier, when that is not what the view was created for. The exception, for me, is logic that specifically has to do with the UI - for instance, if you select this checkbox, it will disable 4 fields in the table and change the label text of another field.
    We both see the other person's point of view and we can't decide where to move from here.  We're open to the opinions of other Web Dynpro experts.  What do you guys think?
    Thanks,
    Jennifer

    We have had the same discussion at my company and came to the following conclusion and coding standard. It is preferable to keep all backend model call logic in the component controller, for various reasons: if you ever drop a view or go to using some other type of UI interface, e.g. PDAs, with the same controller, you already have the functionality there in the controller, coded and tested.
    What I saw happen before we adopted this standard, with consultants or less experienced developers, is that they tend to copy and paste the same functionality from view to new view, creating a maintenance headache down the road, even duplicating the code because they cannot find the functionality, or are too lazy to look for it and try coding their own. It is for this reason our company adopted the best practice of creating all backend call logic, even if it is only required by a single Web Dynpro view, in the component controller and calling it from the view as wdThis.wdGet.doXXXFunction();
    Also, this helps code maintainers and new team members, as they know all backend call logic is in the controller. I believe you will also find this practice recommended in the SAP Press Web Dynpro Java books.
    Alex

  • Finished taking Sprinkler CLD Practice Exam

    I am planning on taking my CLD this coming week, and just finished taking this practice exam. Since I studied the car wash and ATM solutions, I decided to go for the Sprinkler practice exam. The "Sprinkler CLD.zip" file is the result of 4 dedicated hours of my Saturday.
    I ran the VI analyzer on all VIs and CTLs and I'm not impressed with myself. Could somebody tell me how they think I would score?
    I looked at the solution for the Sprinkler.vi and it's clear that my approach is nothing like the solution from NI. This could be a good or a very bad thing. 
    It appears quick comments could mean a lot if the graders depend heavily on the VI Analyzer. It appears that I should have at least two comments in each VI, and not only have the documentation section filled in for the VI but also for the controls.
    It's clear that I missed some wires when I resized my case select boxes.
    After finishing the exam and then looking back, I see there is a possible lock-out condition on initialization that would prevent the VI from reading the CSV file. I shouldn't have created a "READ CSV" state. If I had placed the "READ CSV FILE" inside the "Power Up Configuration" state there would be no issues. I should have restarted LabVIEW in my last hour. If the VI starts up with the Water Pressure above 50% and No Rain then the CSV file is read and there is no problem. This would have been an obvious mistake had I restarted LabVIEW.
    I realize that I missed some of the specifications. For example, if it starts raining during a sequence it is supposed to restart the sequence, not pause it.
    There are few comments in the code. I usually add many comments to my code, but this is my first time using a simple state engine.
    At work I have a large infrastructure already in place, complete with error handling and task management. I am also used to working on multiple monitors; during the test I only used one. Even if I didn't pass this practice exam, at least having a dry run outside my normal work conditions was very good practice.
    I spent time practicing earlier and can build the Timer.VI in about 8 minutes. A functional global timer seems to be a common theme in the practice exams.
    Does anybody have any ideas or suggestions?
    Do you think I would have passed the CLD exam with this test?
    Comments?
    Regards,
    Attachments:
    VI Analyzer Results.zip ‏4 KB
    Sprinkler CLD.zip ‏377 KB

    There are a lot of good things in your code, you are nearly there. I haven't run your code, so this is more style and documentation comments.
    If I were you, I would concentrate on the following:
    Wire error through all your subVIs and put your subVI code in an error/no error case structure. If you had done that, you wouldn't have needed the flat sequence structure in your code.
    You haven't even wired error to the subVIs that have error terminals; this will cost you points.
    Label any constants on the block diagram.
    Brief description of Algorithm on each VI block diagram.
    You could have avoided using local variables, for example Run Selector, as this control is available in the cluster, so just an Unbundle By Name would have given you the value of that control. If you do use them, then make sure you state why (for example, efficiency) in a block diagram comment.
    Some subVIs are missing VI documentation; this won't be taken lightly.
    Using the default value when unwired (for your while loop stop) is not recommended. This was specifically discussed during a CLD preparation class I attended not so long ago.
    While icons are pretty, I wouldn't waste time trying to find glyphs for your subVIs; a consistent text-based icon scheme is perfectly acceptable. You can do this if you do have extra time, but it won't fetch you extra points.
    LabVIEW 2012 has subdiagram labels; you can enable them by default in Tools>>Options. Adding comments in each of the cases is recommended.
    The main thing is time management, and make sure you read other posts/blogs on the CLD. I would also recommend Quick Drop; if you haven't started using it, it may not be a good idea to start now for your exam next week, but in general it is very useful and saves time.
    Hope this helps.
    Beginner? Try LabVIEW Basics
    Sharing bits of code? Try Snippets or LAVA Code Capture Tool
    Have you tried Quick Drop?, Visit QD Community.

  • Event ID 1008 The Open Procedure for service "BITS" in DLL "C:\Windows\System32\bitsperf.dll" failed. Performance data for this service will not be available. The first four bytes (DWORD) of the Data section contains the error code.

    I keep getting the above error on all my SQL 2012 deployments (Standard and Enterprise) on all Windows Server 2012 Standard machines. I have already tried the following commands in administrator mode to resolve it, without success:
    lodctr bitsperf.dll
    lodctr /R
    Any other suggestions?
    Diane

    Hi Diane Sithoo,
    You posted the same question twice. Please avoid this practice on the forum; I have merged the threads. Thanks for your understanding.
    According to your description, we need to verify when the error occurs and whether SQL Server stops working when it happens. If so, we need you to help us collect the detailed error log in SQL Server Management Studio (SSMS). Please refer to the following steps for collecting the error log.
    In SSMS, expand Management, then SQL Server Logs, right-click a log and click View SQL Server Log.
    If SQL Server runs well, then the error is with a Windows Server service; I recommend you post the question in the Windows Server General Forum (http://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winservergen), where it is more appropriate and more experts will assist you.
    In addition, regarding Event ID 1008, you may need to reload the performance library when it is not properly initialized during installation. Then you can use Windows Reliability and Performance Monitor to verify that performance counters are properly collected and displayed in a Performance Monitor graph. For more information, you can review the following article.
    http://technet.microsoft.com/en-us/library/cc774913(v=ws.10).aspx
    Regards,
    Sofiya Li
    TechNet Community Support

  • Apple says not to carry your iphone near your body, but how are you supposed to carry it? What does everyone think about this?

    Just curious - I don't see much about this topic, but apparently the radiation levels of this phone are very high. I bet most people carry their phones in their pockets, yet no one is asking about the fact that Apple says not to. Is this practical? Is this something to be concerned about? I don't care about a phone bending vs. my health being seriously affected.

    I suspect they are just covering their butts against possible future lawsuits. There are reports that some women who carried their (smaller) cell phones in their bra went on to develop breast tumors directly adjacent to where the phone usually sat. There are also studies that find nerve cell stimulation in the brain directly adjacent to the ear where a phone is being used. Three companies have now developed machines designed to use this effect to treat depression. There is much that is unknown about the effects of RF radiation, so phone companies logically want to avoid liability. No one really knows the risks.
    Franz Kaiser MD

  • Fixing this TRIGGER Syntax

    I am practicing triggers in SQL Server 2012. Please help me correct the trigger syntax for the practice question below:
    Build a trigger on the emp table, after insert, that adds a record into the emp_History table and marks the IsActive column to 1.
    CREATE TRIGGER trgAfterInsert ON [dbo].[emp_triggers]
    FOR INSERT
    AS
    declare @empid int;
    declare @empname varchar(100);
    declare @isactive int;
    select @empid = i.empid from inserted i;
    select @empname = i.empname from inserted i;
    set @isactive = 1;
    insert into emphistory (empid, empname)
    values (@empid, @empname, @isactive);
    PRINT 'AFTER INSERT trigger fired.'

    Your trigger does not work if an insert statement inserts multiple rows into your emp_triggers table. Never write triggers that only work correctly when 1 row is inserted, updated, or deleted by one command. You want:
    CREATE TRIGGER trgAfterInsert ON [dbo].[emp_triggers]
    FOR INSERT
    AS
    insert into emphistory
    (empid, empname, isactive)
    select empid, empname, 1 from inserted ;
    which will work correctly no matter how many rows (0, 1, or many) are inserted by one INSERT command.
    Tom

  • Best Practice: Application runs on Extend Node or Cluster Node

    Hello,
    I am working within an organization wherein the standard way of using Coherence is for all applications to run on extend nodes which connect to the cluster via a proxy service. This practice is followed even if the application is a single, dedicated JVM process (perhaps a server, perhaps a data aggregator) which could easily be co-located with the cluster (i.e. on a machine on the same network segment as the cluster). The primary motivation behind this practice is to protect the cluster from a poorly designed / implemented application.
    I want to challenge this standard procedure. If performance is a critical characteristic then the "proxy hop" can be eliminated by having the application code execute on a cluster node.
    Question: Is running an application on a cluster node a bad idea or a good idea?

    Hello,
    It is common to have application servers join as cluster members as well as Coherence*Extend clients. It is true that there is a bit of extra overhead when using Coherence*Extend because of the proxy server. I don't think there's a hard and fast rule that determines which is a better option. Has the performance of said application been measured using Coherence*Extend, and has it been determined that the performance (throughput, latency) is unacceptable?
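    If it hasn't been measured, a quick way to get a first number is to time the same put/get loop once from a JVM configured as a Coherence*Extend client and once from one configured as a cluster member. A rough sketch follows; the cache name and iteration count are made up, and the code itself doesn't care which way it connects, since that is decided by the cache configuration on the classpath rather than by the code.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class LatencyProbe {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("test-cache");   // hypothetical cache name
            int iterations = 10000;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                cache.put(Integer.valueOf(i), "value-" + i);          // round trip to storage, direct or via the proxy
                cache.get(Integer.valueOf(i));
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("average put+get: %.1f us%n", elapsed / 1000.0 / iterations);
            CacheFactory.shutdown();
        }
    }

    Comparing the two numbers against the application's actual latency budget is usually a more persuasive argument than the architectural principle on its own.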
    Thanks,
    Patrick

  • Changing a Cube - SAP Best Practice

    I have a situation where a consultant we have is speaking of an SAP Best Practice but cannot provide any documentation to support the claim.
    The situation is that a change has been made in BW Dev to a key figure (the data type was changed). Of course, the transport fails in the BW QA system. OSS note 125499 suggests activating the object manually.
    To do this I will need to open up the system for changes and deactivate the KF in question; then a core SAP BW table (RSDKYF) is to be modified to change the data type. Then, upon activation of the KF, the data in the cube will be converted.
    If I delete the data in the cube, apply the transport, and then reload from the PSA, would this work as well? I would rather not open up the systems and have core BW tables modified. That just doesn't seem like a best practice to me.
    Is this practice a SAP Best Practice?
    Regards,
    Kevin

    Hello Kevin,
    opening the system for manual changes is not best practice. There are only a few exceptional cases where this is necessary (usually documented in SAP notes).
    The "easy" practice would be to add a new key figure instead of changing the data type. Obviously this causes some rework in dependent objects, but the transport will work and no table conversions will be required.
    The "safe" practice is to drop and reload the data. You can do it from the PSA if the data is still available, or create a backup InfoCube and use the data mart interface to transfer data between the original and the backup.
    Regards
    Marc
    SAP NetWeaver RIG
