Tuesday, October 7, 2014

Query TCP and UDP ports in Windows

Portqry.exe is a command-line utility that you can use to help troubleshoot TCP/IP connectivity issues.

The utility reports the port status of TCP and UDP ports on a computer that you select.

Portqry.exe can query a single port, an ordered list of ports, or a sequential range of ports.

It's a great tool to have when you need to query UDP ports, something Telnet cannot do.

Portqry.exe reports the status of a TCP/IP port in one of the following three ways:
·    Listening
A process is listening on the port on the computer that you selected. Portqry.exe received a response from the port.


·     Not Listening
No process is listening on the target port on the target system.


·    Filtered
The port on the computer that you selected is being filtered. Portqry.exe did not receive a response from the port. A process may or may not be listening on the port.

Syntax:
·   -n [server]—IP address or name of server to query
·   -p [protocol]—TCP or UDP or BOTH (default is TCP)
·   -e [endpoint]—single port to query (valid range: 1-65535)
·   -r [endpoint range]—range of ports to query (start:end)
·   -o [endpoint order]—list of ports to query in a specific order (x,y,z)
·   -l [logfile]—name of log file to create
·   -s—"slow link delay" waits longer for UDP replies from remote systems
·   -I—bypasses default IP address-to-name lookup; ignored unless an IP address is specified after -n
·  -q—"quiet" operation runs with no output

Examples:
The following command tries to resolve "reskit.com" to an IP address and then queries TCP port 25 on the corresponding host:
portqry -n reskit.com -p tcp -e 25

The following command tries to resolve "169.254.0.11" to a host name and then queries TCP ports 143,110, and 25 (in that order) on the host that you selected. This command also creates a log file (Portqry.log) that contains a log of the command that you ran and its output.
portqry -n 169.254.0.11 -p tcp -o 143,110,25 -l portqry.log

The following command tries to resolve my_server to an IP address and then queries the specified range of UDP ports (135-139) in sequential order on the corresponding host. This command also creates a log file (my_server.txt) that contains a log of the command that you ran and its output.
portqry -n my_server -p udp -r 135:139 -l my_server.txt
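
For instance, to check whether a DNS server is answering on UDP port 53 (a typical case where Telnet cannot help), something along these lines should work; the address 10.0.0.1 is just a placeholder for your DNS server:

portqry -n 10.0.0.1 -p udp -e 53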



Download link: http://www.microsoft.com/en-us/download/confirmation.aspx?id=17148

Sunday, October 5, 2014

Check database allocated and unused size

SET NOCOUNT ON ;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
-- create temporary table that will store data space used information
CREATE TABLE ##tmp_sfs
   (
    fileid INT
   ,filegroup INT
   ,totalextents INT
   ,usedextents INT
   ,name VARCHAR(1024)
   ,filename VARCHAR(1024)
   ,DBName VARCHAR(128)
   ) ;       
  
-- Declare variables
DECLARE @CMD VARCHAR(2000) ;
-- Command to gather space for each database
SET @CMD = 'DECLARE @DBName varchar(128);
SET @DBName = ''?'';
INSERT INTO ##tmp_sfs (fileid,filegroup,totalextents,
usedextents,name,filename)
EXEC (''USE ['' + @DBName + '']
DBCC SHOWFILESTATS WITH NO_INFOMSGS'');
UPDATE ##tmp_sfs SET DBName = @DBName WHERE DBName is NULL' ;
-- Run command against each database
EXEC master.sys.sp_MSforeachdb @CMD ;
SELECT   DBName
        ,[DataAllocMB]
        ,[DataUsedMB]
        ,[DataAllocMB] - [DataUsedMB] AS [DataUnUsedMB]
        ,[LOG File(s) Size (KB)] / 1024.0 AS [LogAllocMB]
INTO     #Capacity
FROM     ( SELECT instance_name AS DBName
                 ,cntr_value
                 ,counter_name
           FROM   sys.dm_os_performance_counters
           WHERE  counter_name IN ( 'Log File(s) Size (KB)' )
                  AND instance_name NOT IN ( '_Total' , 'mssqlsystemresource' )
           UNION ALL
           SELECT DBname
                 ,totalextents * 8 / 128.0 AS cntr_value
                 ,'DataAllocMB' AS counter_name
           FROM   ##tmp_sfs
           UNION ALL
           SELECT DBname
                 ,usedextents * 8 / 128.0 AS cntr_value
                 ,'DataUsedMB' AS counter_name
           FROM   ##tmp_sfs ) AS PerfCounters
         PIVOT ( SUM(cntr_value)
                 FOR counter_name IN ( [LOG File(s) Size (KB)] , [DataAllocMB] , [DataUsedMB] ) ) AS pvt ;

--- Total Capacity
SELECT   @@SERVERNAME AS ServerName
        ,COUNT(DBName) AS DBCount
        ,SUM(DataAllocMB) AS Allocated_TotalData_MB
        ,SUM(DataUsedMB) AS Used_TotalData_MB
        ,SUM(DataUnUsedMB) AS UnUsed_TotalData_MB
        ,SUM(LogAllocMB) AS Allocated_TotalLog_MB
        ,SUM(DataAllocMB) + SUM(LogAllocMB) AS Total_MB
FROM     #Capacity

--- Total Capacity GB
SELECT   @@SERVERNAME AS ServerName
        ,COUNT(DBName) AS DBCount
        ,SUM(DataAllocMB)/1024 AS Allocated_TotalData_GB
        ,SUM(DataUsedMB)/1024 AS Used_TotalData_GB
        ,SUM(DataUnUsedMB)/1024 AS UnUsed_TotalData_GB
        ,SUM(LogAllocMB)/1024 AS Allocated_TotalLog_GB
        ,(SUM(DataAllocMB) + SUM(LogAllocMB)) / 1024 AS Total_GB
FROM     #Capacity
---  Capacity Per Data Base
SELECT   DBName
        ,[DataAllocMB] AS AllocatedData_MB
        ,[DataUsedMB] AS UsedData_MB
        ,[DataUnUsedMB] AS UnUsedData_MB
        ,[LogAllocMB] AS AllocatedLog_MB
FROM     #Capacity
ORDER BY [DataAllocMB] DESC

-- drop temporary table
DROP TABLE ##tmp_sfs
DROP TABLE #Capacity ;
GO

Why are your SQL log files growing too big?

Every SQL Server database has at least two files: a data file and a transaction log file.  The data file stores user and system data, while the transaction log file stores all transactions and the database modifications made by those transactions.  As time passes, more and more database transactions occur and the transaction log needs to be maintained.  If your database is in the Simple recovery model, the inactive portion of the transaction log is truncated after the Checkpoint process occurs.  The Checkpoint process writes all modified data pages from memory to disk.  When the Checkpoint is performed, the inactive portion of the transaction log is marked as reusable.

Transaction Log Backups
If your database recovery model is set to Full or Bulk-Logged, then it is absolutely VITAL that you make transaction log backups to go along with your full backups.  SQL Server 2005 databases are set to the Full recovery model by default, so you may need to start creating log backups even if you haven't run into problems yet.  The following query can be used to determine the recovery model of the databases on your SQL Server instance:

SELECT name, recovery_model_desc
FROM sys.databases

Be aware: Full backups are the starting point for any type of recovery process, and are critical to have in case you run into trouble.  In fact, you cannot create transactional log backups without first having created a full backup at some point.

The Full or Bulk-logged Recovery Mode
With the Full or Bulk-Logged recovery model, inactive transactions remain in the transaction log file until after a Checkpoint is processed and a transaction log backup is made.  Note that a full backup does not remove inactive transactions from the transaction log.  The transaction log backup performs a truncation of the inactive portion of the transaction log, allowing it to be reused for future transactions.  This truncation does not shrink the file; it only allows the space in the file to be reused.  It is these transaction log backups that keep your transaction log file from growing too large.

If your database recovery model is set to Full, and you're not creating transaction log backups and never have, you may want to consider switching your recovery model to Simple.  The Simple recovery model should take care of most of your transaction log growth problems, because log truncation occurs after the Checkpoint process.  Keep in mind that you will not be able to recover your database to a point in time using Simple. To switch your recovery model to Simple, issue the following statement against your database:

       ALTER DATABASE YourDatabaseName

       SET RECOVERY SIMPLE

 
How to prevent / fix the problem:

 1) Convert the Recovery Model to Simple Recovery
If you are truncating the transaction logs, you are breaking the T-Log LSN (Log Sequence Number) chain. This means that if disaster strikes, you will not be able to restore your T-Logs and there will be no option to do a point-in-time recovery. If you are fine with this situation and there is nothing to worry about, I suggest that you change your recovery model to the Simple Recovery Model. This way, you will not have extraordinary growth of your log file.
OR
2) Start Taking Transaction Log Backups
If your business does not tolerate data loss or requires point-in-time recovery, you cannot afford anything less than the Full Recovery Model. In the Full Recovery Model, your transaction log will grow until you take a backup of it, so you need to take T-Log backups at a regular interval. This way, your log will not grow beyond a certain limit. If you are taking an hourly T-Log backup, your T-Log can grow for up to an hour, but each backup then truncates the committed transactions. The size of the T-Log file does not go down much; instead, the space inside it is marked as reusable for the next hour's transactions.

With this method, you can restore your database at Point of Time if a disaster ever happens at your server.
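
As a minimal sketch, assuming a SQL Server Agent job that runs every hour (YourDatabaseName and the backup path are placeholders you would adapt):

BACKUP LOG YourDatabaseName
TO DISK = N'D:\Backups\YourDatabaseName_log.trn';

How often you run it should be driven by how much data loss your business can tolerate.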

Not performing transaction log backups is probably the main cause for your transaction log growing too large.  However, there are other situations that prevent inactive transactions from being removed even if you're creating regular log backups.  The following query can be used to get an idea of what might be preventing your transaction log from being truncated:

SELECT name, log_reuse_wait_desc
FROM sys.databases

Possible reasons can be:

Long-Running Active Transactions
A long-running transaction can prevent transaction log truncation.  These types of transactions can range from transactions being blocked from completing to open transactions waiting for user input.  In any case, the transaction ensures that the log remains active from the start of the transaction.  The longer the transaction remains open, the larger the transaction log can grow.  To see the longest-running transaction on your SQL Server instance, run the following statement:

DBCC OPENTRAN

If there are open transactions, DBCC OPENTRAN will provide a session_id (SPID) of the connection that has the transaction open.  You can pass this session_id to sp_who2 to determine which user has the connection open:

EXECUTE sp_who2 spid

Alternatively, you can run the following query to determine the user:
SELECT * FROM sys.dm_exec_sessions
WHERE session_id = spid  --from DBCC OPENTRAN

You can determine the SQL statement being executed inside the transaction a couple of different ways.  First, you can use the DBCC INPUTBUFFER() statement to return the first part of the SQL statement:

DBCC INPUTBUFFER(spid)  --from DBCC OPENTRAN

Alternatively, you can use a dynamic management view included in SQL Server 2005 to return the SQL statement:

SELECT r.session_id,
       r.blocking_session_id,
       s.program_name,
       s.host_name,
       t.text
FROM sys.dm_exec_requests r
     INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
     CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1
  AND r.session_id = SPID  --FROM DBCC OPENTRAN



Backups

Log truncation cannot occur during a backup or restore operation.  If a database backup is keeping your log from being truncated you might consider cancelling the backup to relieve the immediate problem.

Transactional Replication
With transactional replication, the inactive portion of the transaction log is not truncated until the transactions have been replicated to the distributor.  This may be because the distributor is overloaded and having problems accepting these transactions, or because the Log Reader Agent should be run more often.  If DBCC OPENTRAN indicates that your oldest active transaction is a replicated one and it has been open for a significant amount of time, this may be your problem.

Database Mirroring
Database mirroring is somewhat similar to transactional replication in that it requires that the transactions remain in the log until the record has been written to disk on the mirror server. If the mirror server instance falls behind the principal server instance, the amount of active log space will grow. In this case, you may need to stop database mirroring, take a log backup that truncates the log, apply that log backup to the mirror database and restart mirroring.

Disk Space

It is possible that you're just running out of disk space and it is causing your transaction log to error. If you cannot free enough disk space on the drive that currently contains the log file then you may need to move the file to a drive with enough space to handle the log.  If your log file is not set to grow automatically, you'll want to consider changing that or adding additional space to the file.  Another option is to create a new log file for the database on a different disk that has enough space by using the ALTER DATABASE YourDatabaseName ADD LOG FILE syntax.
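
A rough sketch of that last option, assuming a second drive with free space (the logical name, path, and sizes are placeholders to adapt):

ALTER DATABASE YourDatabaseName
ADD LOG FILE
(
    NAME = YourDatabaseName_log2,
    FILENAME = N'E:\SQLLogs\YourDatabaseName_log2.ldf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);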

Shrinking the File

Once you have identified your problem and have been able to truncate your log file, you may need to shrink the file back to a manageable size.  You should avoid shrinking your files on a regular basis, as it can lead to fragmentation issues.  However, if you've performed a log truncation and need your log file to be smaller, you're going to need to shrink it.  You can do this through Management Studio by right-clicking the database, selecting Tasks, then Shrink, and then choosing Database or Files.  If I am using the Management Studio interface, I generally select Files and shrink only the log file.

This can also be done using T-SQL.  The following query will find the name of my log file; I'll need this to pass to the DBCC SHRINKFILE command:

SELECT name
FROM sys.database_files
WHERE type_desc = 'LOG'

Once I have my log file name, I can use the DBCC SHRINKFILE command to shrink the file.  In the following command, I try to shrink my log file down to 1 GB (the target size is specified in MB).

DBCC SHRINKFILE ('SalesHistory_Log', 1000)

Also, make sure that your databases are NOT set to auto-shrink.  Databases that are shrunk at regular intervals can encounter real performance problems.
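
A quick way to check this and turn it off where needed (YourDatabaseName is a placeholder):

SELECT name, is_auto_shrink_on
FROM sys.databases;

ALTER DATABASE YourDatabaseName
SET AUTO_SHRINK OFF;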


TRUNCATE_ONLY and NOLOG

If you're a DBA and have run into one of the problems listed in this article before, you might be asking yourself why I haven't mentioned just using TRUNCATE_ONLY to truncate the log directly without creating a log backup.  The reason is that in almost all circumstances you should avoid doing it: it breaks the transaction log chain, which makes recovering to a point in time impossible.


Moving forward


You should consider adding notifications to your system to let you know when your database files are reaching a certain threshold.  The more proactive you are in terms of alerts for these types of events, the better chance you'll have to correct the issue before it turns into a real problem…

How to make Windows 8 understand you ARE the Administrator

This tweak works best when UAC is already disabled, and it is assumed that you have done so already.


1.  Type secpol.msc in the Start Menu and press Enter.
2.  Double-click Local Policies, then double-click Security Options.
3.  Scroll to the bottom to this entry: User Account Control: Run all administrators in Admin Approval Mode *
4.  Double-click that line, set it to Disabled, and then press OK.
5.  Reboot.

This policy setting controls the behavior of all User Account Control (UAC) policy settings for the computer. If you change this policy setting, you must restart your computer.
*
The options are
• Enabled: (Default) Admin Approval Mode is enabled. This policy must be enabled and related UAC policy settings must also be set appropriately to allow the built-in Administrator account and all other users who are members of the Administrators group to run in Admin Approval Mode.
• Disabled: Admin Approval Mode and all related UAC policy settings are disabled. Note: If this policy setting is disabled, the Security Center notifies you that the overall security of the operating system has been reduced.
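
If you prefer to script this change, the same policy is (to the best of my knowledge) backed by the EnableLUA registry value, so a sketch from an elevated command prompt would be:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 0 /f

As with the policy editor route, a reboot is required for the change to take effect.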

But be aware!
The prompt that asks you to elevate a process with your administrator privileges is not designed to be irritating; it is designed to protect you. It's called the Least Privileged User component, and it is designed to give the user a one-click defense against a wide range of vulnerabilities, like the Blaster worm and other viruses that spread because everyone ran Windows XP with an unprotected administrator account.

In Windows Vista and later, all administrator accounts run as standard users unless the user (or an unknown component) initiates a process that requires administrator privileges, and this is when the system will ask the logged-on user for those privileges. This gives complete control back to the user and allows you to approve any administrator elevation with one click.

Remember: It is your computer, but only until you intentionally defeat the built-in security components. Then it will belong to them.

Using NSLOOKUP to query and verify MX records

1) From a command prompt type ‘nslookup’.

2) By default, this will connect to the client's primary DNS server.

    If you are required to change the DNS server, type the following, where x.x.x.x is the IP address of the DNS server:

server x.x.x.x

3) Type ‘set type=mx’ to configure the query type to be for MX records.

4) Type in the email domain, for example gmail.com.

5) A list of MX records will be returned, listed in order of preference, along with name server and host record details.
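
A quick illustrative session, typed one line at a time at the nslookup prompt (8.8.8.8 is just an example of a public DNS server; any reachable DNS server will do):

nslookup
server 8.8.8.8
set type=mx
gmail.com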

Solid State Drives

If a solid-state drive doesn’t have any moving parts, how does it store data?

Instead of using motors and magnets, they use electrical current to signify the 1s and 0s that all data is comprised of.
While a magnetic hard drive works by changing the magnetism of sectors on a spinning platter, solid-state drives have lots and lots of tiny transistors.
When an electrical current is able to pass through a transistor it represents a 1, and a 0 is represented when a transistor does not accept current. 
Every solid-state drive has a controller. This is a processor that is responsible for reading and writing to the transistors and passing this data to the rest of the computer system to be used.
It also manages all the data on the drive, keeping track of where data is stored and making sure that data is distributed over the transistors evenly.
Cells are grouped into pages, which are usually 4KB. This is the smallest unit of data that can be written to an SSD.
Similarly, pages are grouped into blocks, which are typically made up of 128 pages (512KB).
Taking it one step further, blocks are organized in groups of 1024 to form planes, which are usually 512MB in size.
Multiple planes make up one of the flash chips you can see on an SSD circuit board.

Advantages: 

1.    Speed:
This is the biggest reason to go for an SSD; they're much, much faster than conventional hard drives in two different ways.
First of all, because there are no moving parts, there is no waiting for the drive to spin up or for a head to move to the correct place to start reading data, so there is very little latency.
Since data can be read from anywhere on the drive almost instantly, tasks that rely on lots of data being read from all over the place complete much, much faster.

2.    Defrag: an added benefit is that you never have to defragment your files, which is always a nice perk.


3.    Sustained transfer speed: besides outperforming HDDs in random reads, SSDs also have the upper hand in sustained reading and writing.
 A typical new magnetic hard drive will be able to read at 60-80 MB/s, whereas a midrange solid-state drive will be able to read at anywhere from 175-250 MB/s.

 4.    Durability:
The other major advantage that SSDs have is again due to their lack of moving parts, making them virtually invulnerable to vibration, magnetism, or being dropped, all things that would seriously damage a magnetic drive.

5.    Noise:
Hard drives make a surprising amount of noise. Having a drive that completely outperforms a conventional hard drive while making no noise at all is a definite plus.


6.    Power Consumption:
One of the big things that defines a laptop is its battery life, and most solid-state drives draw a third to a half less power than even "low power" hard drives.

7.    Temperature:
Hard drives get hot when they're in use for long periods of time; spinning platters at 7,200 RPM isn't exactly the best way to keep a motor cool.
SSDs, on the other hand, generate very little heat and require no cooling.

 Disadvantages
  1.       Price:
Unfortunately, SSDs are still very much a developing market, so you still pay a massive premium for all those advantages.

2.       Reliability/Longevity:
The major problem SSDs have is with longevity. As fast as the technology allows solid-state drives to be, it also limits them: each transistor may only be written to a certain number of times before it becomes "stuck" and cannot be written to any more.
While the SSD's controller helps to handle this by distributing writes between the transistors with wear levelling, eventually the drive will start to become very unreliable very quickly.

     3.       Long-Term Performance:
Over time the drive becomes full of data. While this doesn't stop you from continuing to add data until you have filled the capacity of the drive, the writing speed of the drive appears to become slower and slower.
Put simply, this is because as the drive's pages fill up, it needs to rearrange data to free more pages to write to.
There are now ways to alleviate this degradation without wiping the drive, usually involving either a command called TRIM or the controller automatically carrying out a process called garbage collection in blocks that aren't being used.

     4.       Data Recovery:
Usually it takes a long time for an HDD to fail, giving you plenty of warning and the opportunity to back up the data to another source.
You don't get that with a solid-state drive; when a drive fails, it fails completely and instantly.
When that happens, it's impossible to retrieve the data.

     5.       Capacity: SSDs still lag well behind SATA and SAS 10K RPM disks in available capacity per drive.
 
     6.       Brands:
Be wary about buying a drive that is significantly cheaper than all the other SSDs with similar capacities. There's usually a very good reason for it!

To be aware of:

      1.       Controllers:
The controller manages the data on the solid-state drive and keeps track of where the data is kept.
It's very important to have a decent controller, or the performance of the drive is seriously impaired.
If possible, avoid SSDs based on a JMicron controller. The main ones you'll want to get will use either the Indilinx "Barefoot" or SandForce controllers.
Intel and Samsung also make their own SSD controllers that perform well.

    2.       TRIM Support:
TRIM support is extremely important when it comes to buying an SSD.
TRIM is a command the operating system can use to tell the SSD which files have been deleted, giving the drive the chance to free up pages that aren't being used any more by rewriting the block while it's not in use.
This means that the block doesn't need to be rewritten while you're trying to write to it, which is what causes the performance degradation in the first place.

MLC vs. SLC:
It’s quite simple: MLC stands for multi-level cell, whereas SLC stands for single-level cell.
This means that each transistor (or cell) in an MLC drive holds two bits of information, whereas in an SLC they only hold one.
When checking for data, an SLC drive only needs to check if the bit is a 1 or a 0. Each cell in an MLC drive, on the other hand, has four states: 11, 10, 01 or 00. This check takes around three times longer to perform.
The result is that an SLC drive has faster transfer speeds, lower power consumption and more read-write cycles, but only half the storage density of an MLC drive for the same number of transistors, making it much more expensive for a drive of the same capacity.
This makes SLC drives prohibitively expensive for consumer use, but perfect for enterprise servers that need the extra performance and reliability.

IT departments considering SSDs should take both the performance and the price/performance ratios into consideration before making a choice.

For some IT operations, the additional cost might not justify the performance improvement:

This technical report provides price vs. performance results:
http://www.dell.com/downloads/global/products/pvaul/en/ssd_vs_hdd_price_and_performance_study.pdf

The objective is to provide recommendations on drive types that are best suited for some common applications.

Since I guess you don't have time to read this long (but very interesting) report, here are the bottom lines:
1.       For sequential access workloads, such as media streaming applications and SQL logs, the price/MB-per-second ratio is much greater than the corresponding performance ratio when moving from HDDs to SSDs.
2.       SSDs are best suited for applications that use random data accesses and small payload sizes.
The report also provides drive recommendations for each application workload I/O profile.

Setup Performance Monitor to always collect Server performance statistics

How can we setup Performance Monitor to collect data all of the time, so we can go back and review the data when needed?
1.  Click Start, click Run, and then type perfmon.msc
2.  Click the plus sign next to "Performance Logs and Alerts" to expand
3.  Click "Counter Logs".
4.  On the menu select Action -> New Log Settings and enter a name that makes sense to you.
5.  Now it will open a new window.
6.  There are three areas that you have to setup:

-      Counters to monitor: what are the areas that you want to monitor
-      Log file type: format for storing the data
-      Schedule: how often you want to track and store the collected data

7.  Setting counters: Click Add Counters to choose individual counters or, if you wish, you can choose Add Objects.  One thing to note is that if you select Add Objects you may add counters that you never need, which wastes not only storage but also system resources.

8.  If you are not sure what information the counter will give you, you can click "Explain"; it will open another window and offer an explanation of the counter that you chose.

9.  Be aware that if you collect data every 15 seconds or less, it may put too much pressure on your server and also waste storage. Depending on what you do, you may want to collect every 1 minute.

10.   If you are collecting data on the server locally, you do not need to set "Run As:", but if you are collecting data from a remote server, then you need to set a service account that has the proper permissions to collect the data. The service account needs to have at least the "Profile system performance" right from Local Security Settings. For more information, you can read this article: http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/gp/551.mspx?mfr=true

11.   Setting the log file type: This determines how the data will be stored once it is collected.  You can use Text File (Comma delimited) and choose End file names with: yyyymmdd.

12.   And then choose Configure.

13.   I strongly recommend changing Location to a drive other than the C: drive.

14.   Setting the schedule: set Start log at to 12:00:00 AM on the following day, set Stop log after to 1 day, and choose Start a new log file.

15.   This will create one file each day with a filename like D:\PerfLogs\PerfMon_20080502.csv

16.   Click OK to finish this step.

17.   Before you set up the schedule, you can first run it manually by clicking the run icon.  This lets you make sure you are collecting the correct data before scheduling it to run. For the test, check that it created the file correctly.

18.   Once you are done testing, go back to the Schedule section and set the schedule correctly once again, especially the "Start a new log file" setting.

19.   Also set up a script to delete old files so you don't run out of space on your file system (see the sketch after this list).

20.   To look at the data using Excel, just double click on the CSV file that was created to see if you are collecting the counters correctly.
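
As a minimal cleanup sketch, assuming the log folder and file naming from step 15 and a 30-day retention period (adjust both to your environment), a scheduled task could run:

forfiles /p D:\PerfLogs /m PerfMon_*.csv /d -30 /c "cmd /c del @path"

This deletes matching performance logs older than 30 days.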

One thing to remember is that there is no exact threshold for a lot of the counters to determine whether there is an issue or not, so it is very important to collect performance counter data while the server is running healthy, to have a baseline to measure against.