This patch changes md4 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch changes sha1 to the new shash interface.
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since most cryptographic hash algorithms have no keys, this patch
makes the setkey function optional for ahash and shash.
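For illustration, one way a missing setkey can be handled at tfm setup time
(helper and field names here are illustrative, not necessarily what the patch
uses): keyless algorithms get a stub that rejects any attempt to set a key.

/* stub used when a hash algorithm declares no setkey (illustrative) */
static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
			   unsigned int keylen)
{
	return -ENOSYS;
}

/* when instantiating the tfm, fall back to the stub */
crt->setkey = alg->setkey ?: shash_no_setkey;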
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When self-testing (de)compression algorithms, make sure the actual size of
the (de)compressed output data matches the expected output size.
Otherwise, if the actual output size is smaller than the expected output
size, the subsequent buffer comparison could still succeed and no error
would be reported.
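A sketch of the added check for the compression direction (the decompression
side is analogous; names follow crypto/testmgr.c, error handling abbreviated):

unsigned int dlen = COMP_BUF_SIZE;

ret = crypto_comp_compress(tfm, ctemplate[i].input,
			   ctemplate[i].inlen, result, &dlen);
if (ret)
	goto out;

/* verify the produced length before comparing the buffers */
if (dlen != ctemplate[i].outlen) {
	printk(KERN_ERR "alg: comp: Compression test %d failed for %s: "
	       "output len = %d\n", i + 1, algo, dlen);
	ret = -EINVAL;
	goto out;
}

if (memcmp(result, ctemplate[i].output, dlen)) {
	ret = -EINVAL;
	goto out;
}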
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Base versions handle constant folding just fine.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This warning:
crypto/testmgr.c: In function ‘test_comp’:
crypto/testmgr.c:829: warning: ‘ret’ may be used uninitialized in this function
triggers because GCC correctly notices that, in the ctcount == 0 &&
dtcount != 0 case, this function can return an undefined value if the
second loop fails.
Remove the shadowed 'ret' variable from the second loop; the shadowing
was probably unintended.
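The shape of the bug, reduced to a sketch (function and helper names are
hypothetical, not the actual testmgr code):

int test_comp_like(int ctcount, int dtcount)
{
	int ret;				/* value returned at the end */
	int i;

	for (i = 0; i < ctcount; i++)
		ret = do_compress(i);		/* never runs when ctcount == 0 */

	for (i = 0; i < dtcount; i++) {
		int ret = do_decompress(i);	/* shadows the outer ret */

		if (ret)
			break;			/* outer ret is still undefined */
	}

	return ret;	/* undefined for ctcount == 0 && dtcount != 0 */
}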
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use KM_SOFTIRQ instead of KM_IRQ in tasklet context.
Add a BUG_ON for the input no-page condition.
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix queue management. Change the ring size, and perform the ring check
using stored pointers to the last checked descriptors instead of walking
the descriptors one after another.
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
HIFN uses the transform context to store per-request data, which breaks
when more than one request is outstanding. Move the per-request members
from struct hifn_context to a new struct hifn_request_context and convert
the code to use it.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Evgeniy Polyakov <zbr@ioremap.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ANSI X9.31 PRNG docs aren't particularly clear on how to increment DT,
but empirical testing shows we're incrementing from the wrong end. A 10,000
iteration Monte Carlo RNG test currently winds up not getting the expected
result.
From http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf :
# CAVS 4.3
# ANSI931 MCT
[X9.31]
[AES 128-Key]
COUNT = 0
Key = 9f5b51200bf334b5d82be8c37255c848
DT = 6376bbe52902ba3b67c925fa701f11ac
V = 572c8e76872647977e74fbddc49501d1
R = 48e9bd0d06ee18fbe45790d5c3fc9b73
Currently, we get 0dd08496c4f7178bfa70a2161a79459a after 10000 loops.
Inverting the DT increment routine results in us obtaining the expected result
of 48e9bd0d06ee18fbe45790d5c3fc9b73. Verified on both x86_64 and ppc64.
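For illustration, the inverted routine treats DT as a big-endian counter,
incrementing the last byte first and carrying toward byte 0 (a sketch of the
idea rather than the literal patch):

int i;

for (i = DEFAULT_BLK_SZ - 1; i >= 0; i--) {
	ctx->DT[i] += 1;
	if (ctx->DT[i] != 0)
		break;		/* stop once the carry no longer propagates */
}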
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Selecting CRYPTO_CRC32C is not enough, as CRYPTO, which CRYPTO_CRC32C
depends on, may be disabled. This patch adds a select on CRYPTO as well.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
While working with some FIPS RNGVS test vectors yesterday, I discovered a
little bug in the way the ansi_cprng code works right now.
For example, the following test vector (complete with expected result)
from http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf ...
Key = f3b1666d13607242ed061cabb8d46202
DT = e6b3be782a23fa62d71d4afbb0e922fc
V = f0000000000000000000000000000000
R = 88dda456302423e5f69da57e7b95c73a
...when run through ansi_cprng, yields an incorrect R value
of e2afe0d794120103d6e86a2b503bdfaa.
If I load up ansi_cprng with dbg=1 though, it's fairly obvious what's
going wrong:
----8<----
getting 16 random bytes for context ffff810033fb2b10
Calling _get_more_prng_bytes for context ffff810033fb2b10
Input DT: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Input I: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Input V: 00000000: f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
tmp stage 0: 00000000: e6 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
tmp stage 1: 00000000: f4 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
tmp stage 2: 00000000: 8c 53 6f 73 a4 1a af d4 20 89 68 f4 58 64 f8 be
Returning new block for context ffff810033fb2b10
Output DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Output I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
Output V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
New Random Data: 00000000: 88 dd a4 56 30 24 23 e5 f6 9d a5 7e 7b 95 c7 3a
Calling _get_more_prng_bytes for context ffff810033fb2b10
Input DT: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Input I: 00000000: 04 8e cb 25 94 3e 8c 31 d6 14 cd 8a 23 f1 3f 84
Input V: 00000000: 48 89 3b 71 bc e4 00 b6 5e 21 ba 37 8a 0a d5 70
tmp stage 0: 00000000: e7 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
tmp stage 1: 00000000: 80 6b 3a 8c 23 ae 8f 53 be 71 4c 16 fc 13 b2 ea
tmp stage 2: 00000000: 2a 4d e1 2a 0b 58 8e e6 36 b8 9c 0a 26 22 b8 30
Returning new block for context ffff810033fb2b10
Output DT: 00000000: e8 b3 be 78 2a 23 fa 62 d7 1d 4a fb b0 e9 22 fc
Output I: 00000000: c8 e2 01 fd 9f 4a 8f e5 e0 50 f6 21 76 19 67 9a
Output V: 00000000: ba 98 e3 75 c0 1b 81 8d 03 d6 f8 e2 0c c6 54 4b
New Random Data: 00000000: e2 af e0 d7 94 12 01 03 d6 e8 6a 2b 50 3b df aa
returning 16 from get_prng_bytes in context ffff810033fb2b10
----8<----
The expected result is there, in the first "New Random Data", but we're
incorrectly making a second call to _get_more_prng_bytes, due to some
checks that are slightly off, which results in our original bytes never
being returned anywhere.
One approach to fixing this would be to alter some byte_count checks in
get_prng_bytes, but that would mean the last DEFAULT_BLK_SZ bytes would
be copied a byte at a time rather than in a single memcpy. A slightly
more involved, equally functional, and ultimately more efficient fix was
suggested to me by Neil, which I'm submitting here. All of the RNGVS
ANSI X9.31 AES128 VST test vectors I've passed through ansi_cprng now
return the expected results with this change.
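Conceptually the fix boils down to this: hand out bytes from the cached block
first and only call _get_more_prng_bytes once that block is exhausted, so the
last block still goes out in a single memcpy. A simplified sketch (the actual
patch adjusts the existing loops in get_prng_bytes rather than rewriting them;
field names follow ansi_cprng's rand_data/rand_data_valid):

while (byte_count) {
	unsigned int avail = DEFAULT_BLK_SZ - ctx->rand_data_valid;

	if (!avail) {
		/* cache exhausted: only now generate a fresh block */
		if (_get_more_prng_bytes(ctx) < 0)
			return -EINVAL;
		ctx->rand_data_valid = 0;
		avail = DEFAULT_BLK_SZ;
	}

	avail = min_t(unsigned int, avail, byte_count);
	memcpy(ptr, ctx->rand_data + ctx->rand_data_valid, avail);
	ctx->rand_data_valid += avail;
	ptr += avail;
	byte_count -= avail;
}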
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ARRAY_SIZE is more concise to use when the size of an array is divided by
the size of its type or the size of its first element.
The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@i@
@@
#include <linux/kernel.h>
@depends on i using "paren.iso"@
type T;
T[] E;
@@
- (sizeof(E)/sizeof(T))
+ ARRAY_SIZE(E)
// </smpl>
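For illustration, the same transformation in C (array and helper names are
hypothetical):

static const u32 lookup[] = { 0x01, 0x02, 0x04, 0x08 };
unsigned int i;

/* before */
for (i = 0; i < sizeof(lookup) / sizeof(lookup[0]); i++)
	use(lookup[i]);

/* after */
for (i = 0; i < ARRAY_SIZE(lookup); i++)
	use(lookup[i]);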
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The bnx2x driver actually uses the crc32c_le name so this patch
restores the crc32c_le symbol through a macro.
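A plausible shape for the compatibility macro (illustrative; the exact casts
in include/linux/crc32c.h may differ):

/* keep the old name as an alias for the crypto-backed helper */
#define crc32c_le(seed, data, length)	crc32c((seed), (data), (length))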
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch swaps the role of libcrc32c and crc32c. Previously
the implementation was in libcrc32c and crc32c was a wrapper.
Now the code is in crc32c and libcrc32c just calls the crypto
layer.
The reason for the change is to tap into the algorithm selection
capability of the crypto API so that optimised implementations
such as the one utilising Intel's CRC32C instruction can be
used where available.
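With the real work in the crypto layer, the libcrc32c wrapper reduces to
something like the sketch below (descriptor setup and error handling are
simplified relative to the actual file). Note how it relies on the running
CRC living in the first four bytes of the descriptor context, which the
following patch adds a test for.

static struct crypto_shash *tfm;	/* allocated as "crc32c" at module init */

u32 crc32c(u32 crc, const void *address, unsigned int length)
{
	SHASH_DESC_ON_STACK(shash, tfm);
	u32 *ctx = (u32 *)shash_desc_ctx(shash);

	shash->tfm = tfm;
	*ctx = crc;				/* seed the partial result */

	crypto_shash_update(shash, address, length);

	return *ctx;				/* partial result: first 4 bytes */
}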
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds a test for the requirement that all crc32c algorithms
shall store the partial result in the first four bytes of the descriptor
context.
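A sketch of such a test (data and expected value are placeholders; the real
test lives in crypto/testmgr.c):

err = crypto_shash_init(desc);
if (!err)
	err = crypto_shash_update(desc, test_data, sizeof(test_data));

/* the contract under test: the running CRC must be stored in the
 * first four bytes of the descriptor context */
if (!err && *(u32 *)shash_desc_ctx(desc) != expected_partial_crc)
	err = -EINVAL;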
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows shash algorithms to be used through the old hash
interface. This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes /proc/crypto call the type-specific show function
if one is present before calling the legacy show functions for
cipher/digest/compress. This allows us to reuse the type values
for those legacy types. In particular, hash and digest will share
one type value while shash is phased in as the default hash type.
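The dispatch in crypto/proc.c then looks roughly like this (sketch):

/* prefer the type-specific show function when one exists */
if (alg->cra_type && alg->cra_type->show) {
	alg->cra_type->show(m, alg);
	goto out;
}

/* otherwise fall back to the legacy cipher/digest/compress output */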
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is often useful to save the partial state of a hash function
so that it can be used as a base for two or more computations.
The most prominent example is HMAC where all hashes start from
a base determined by the key. Having an import/export interface
means that we only have to compute that base once rather than
for each message.
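For example, using the shash variants of the calls, the keyed base is
computed once and then restored per message (a sketch; allocation and error
checks omitted, key/msg buffers assumed):

struct crypto_shash *tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
struct shash_desc *desc = kmalloc(sizeof(*desc) +
				  crypto_shash_descsize(tfm), GFP_KERNEL);
u8 state[512];			/* >= crypto_shash_statesize(tfm) */
u8 digest[32];

desc->tfm = tfm;

crypto_shash_setkey(tfm, key, keylen);
crypto_shash_init(desc);		/* expensive keyed base ...         */
crypto_shash_export(desc, state);	/* ... computed and saved only once */

/* per message: restore the base instead of recomputing it */
crypto_shash_import(desc, state);
crypto_shash_finup(desc, msg, msglen, digest);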
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch allows shash algorithms to be used through the ahash
interface. This is required before we can convert digest algorithms
over to shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The shash interface replaces the current synchronous hash interface.
It improves over hash in two ways. Firstly shash is reentrant,
meaning that the same tfm may be used by two threads simultaneously
as all hashing state is stored in a local descriptor.
The other enhancement is that shash no longer takes scatter list
entries. This is because shash is specifically designed for
synchronous algorithms and as such scatter lists are unnecessary.
All existing hash users will be converted to shash once the
algorithms have been completely converted.
There is also a new finup function that combines update with final.
This will be extended to ahash once the algorithm conversion is
done.
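Typical usage then looks like the sketch below (allocation checks omitted,
input buffers assumed): all state lives in the caller-owned descriptor, so
one tfm can safely be shared by several threads as long as each uses its
own desc.

struct crypto_shash *tfm = crypto_alloc_shash("sha1", 0, 0);
struct shash_desc *desc = kmalloc(sizeof(*desc) +
				  crypto_shash_descsize(tfm), GFP_KERNEL);
u8 out[20];

desc->tfm = tfm;

crypto_shash_init(desc);
crypto_shash_update(desc, buf1, len1);		/* plain buffers, no scatterlists */
crypto_shash_finup(desc, buf2, len2, out);	/* update + final in one call */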
This is also the first time that an algorithm type has its own
registration function. Existing algorithm types will be converted
to this scheme in due course.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch reintroduces a completely revamped crypto_alloc_tfm.
The biggest change is that we now take two crypto_type objects
when allocating a tfm, a frontend and a backend. In fact this
simply formalises what we've been doing behind the API's back.
For example, as it stands crypto_alloc_ahash may use an
actual ahash algorithm or a crypto_hash algorithm. Putting
this in the API allows us to do this much more cleanly.
The existing types will be converted across gradually.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The type exit function needs to undo any allocations done by the type
init function. However, the type init function may differ depending
on the upper-level type of the transform (e.g., a crypto_blkcipher
instantiated as a crypto_ablkcipher).
So we need to move the exit function out of the lower-level
structure and into crypto_tfm itself.
As it stands this is a no-op since nobody uses exit functions at
all. However, all cases where a lower-level type is instantiated
as a different upper-level type (such as blkcipher as ablkcipher)
will be converted such that they allocate the underlying transform
and use that instead of casting (e.g., crypto_ablkcipher cast
into crypto_blkcipher). That will need to use a different exit
function depending on the upper-level type.
This patch also allows the type init/exit functions to call (or not)
cra_init/cra_exit instead of always calling them from the top level.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is a patch that was sent to me by Jarod Wilson, marking off my
outstanding todo to allow the ansi cprng to set/reset the DT counter value
in a cprng instance. Currently crypto_rng_reset accepts a seed byte array
which the ansi_cprng interprets as a {V key} tuple. This patch extends that
tuple to {V key DT}, with DT an optional value during reset. This patch also
fixes a bug we noticed in which the offset of the key area of the seed
started at DEFAULT_PRNG_KSZ rather than DEFAULT_BLK_SZ as it should.
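In effect, the seed is now parsed like this (a sketch of cprng_reset's
interpretation; local names illustrative, macros as in ansi_cprng.c):

u8 *v   = seed;				/* DEFAULT_BLK_SZ bytes */
u8 *key = seed + DEFAULT_BLK_SZ;	/* key offset by the block size,
					 * not by DEFAULT_PRNG_KSZ */
u8 *dt  = NULL;				/* DT is optional */

if (slen >= DEFAULT_BLK_SZ + DEFAULT_PRNG_KSZ + DEFAULT_BLK_SZ)
	dt = key + DEFAULT_PRNG_KSZ;

reset_prng_context(ctx, key, DEFAULT_PRNG_KSZ, v, dt);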
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Resetting the control word is quite expensive. Fortunately this
isn't an issue for the common operations such as CBC and ECB as
the whole operation is done through a single call. However, modes
such as LRW and XTS have to call padlock over and over again for
one operation which really hurts if each call resets the control
word.
This patch uses an idea by Sebastian Siewior to store the last
control word used on a CPU and only reset the control word if
that changes.
Note that any task switch automatically resets the control word
so we only need to be accurate with regard to the stored control
word when no task switches occur.
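Roughly, the idea looks like this (per-CPU variable and function names are
illustrative): remember which control word was last loaded on this CPU,
force the expensive reload only when a different one is about to be used,
and record what was actually used afterwards.

static DEFINE_PER_CPU(struct cword *, last_cword);

static void padlock_reset_key(struct cword *cword)
{
	int cpu = raw_smp_processor_id();

	/* only force the control word reload when it actually changed */
	if (cword != per_cpu(last_cword, cpu))
		asm volatile ("pushfl; popfl");	/* forces the CPU to reload it */
}

static void padlock_store_cword(struct cword *cword)
{
	per_cpu(last_cword, raw_smp_processor_id()) = cword;
}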
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The original copyright header for crc32c-intel.c is incorrect. Please merge
the patch to update it.
Signed-Off-By: Kent Liu <kent.liu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In commit ec6644d632 "crypto: talitos - Preempt
overflow interrupts", the author took the test in atomic_inc_not_zero to be
applied after the increment operation, when it is actually applied before.
This off-by-one fix prevents overflow error interrupts from occurring when
requests are frequent and large enough to do so.
Signed-off-by: Vishnu Suresh <Vishnu@freescale.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the private implementation of 32-bit rotation and unaligned
access with byteswapping.
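For illustration, the generic helpers that take over from the private macros
(a sketch, not the camellia code itself; src/dst are assumed buffers):

#include <linux/bitops.h>	/* rol32(), ror32() */
#include <asm/unaligned.h>	/* get_unaligned_be32(), put_unaligned_be32() */

/* load a big-endian 32-bit word from a possibly unaligned buffer,
 * rotate it and store it back -- no private byteswap macros needed */
u32 w = rol32(get_unaligned_be32(src), 1);
put_unaligned_be32(w, dst);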
As a bonus, fixes sparse warnings:
crypto/camellia.c:602:2: warning: cast to restricted __be32
crypto/camellia.c:603:2: warning: cast to restricted __be32
crypto/camellia.c:604:2: warning: cast to restricted __be32
crypto/camellia.c:605:2: warning: cast to restricted __be32
crypto/camellia.c:710:2: warning: cast to restricted __be32
crypto/camellia.c:711:2: warning: cast to restricted __be32
crypto/camellia.c:712:2: warning: cast to restricted __be32
crypto/camellia.c:713:2: warning: cast to restricted __be32
crypto/camellia.c:714:2: warning: cast to restricted __be32
crypto/camellia.c:715:2: warning: cast to restricted __be32
crypto/camellia.c:716:2: warning: cast to restricted __be32
crypto/camellia.c:717:2: warning: cast to restricted __be32
[Thanks to Tomoyuki Okazaki for spotting the typo]
Tested-by: Carlo E. Prelz <fluido@fluido.as>
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The FIPS specification requires that, should a self-test for any supported
crypto algorithm fail during operation in FIPS mode, the use of any crypto
functionality must be prevented until the system can be re-initialized.
The best way to handle that seems to be to panic the system if we are in
FIPS mode and fail a self-test.
This patch implements that functionality. I've built and run it
successfully.
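The gist of the change, as a sketch of the hook at the end of the self-test
path (message text illustrative):

/* at the end of alg_test(), roughly: */
if (fips_enabled && rc)
	panic("%s: %s alg self test failed in fips mode!\n", driver, alg);

return rc;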
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
SEC version 2.1 and above adds the capability to do the IPSec ICV
memcmp in h/w. Results of the cmp are written back in the descriptor
header, along with the done status. A new callback is added that
checks these ICCR bits instead of performing the memcmp on the core,
and is enabled by h/w capability.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
After testing on different parts, another condition was added before
using the h/w auth check, because different SEC revisions require
different handling.
The SEC 3.0 allows a more flexible link table in which the auth data
can span separate link table entries; the SEC 2.4/2.1 does not support
this case.
So a test was added in the decrypt routine for the fragmented case:
the h/w auth check is disallowed for revisions that lack the extent
field in the link table, and in that case the auth check is done in
software.
A portion of a previous change for SEC 3.0 link table handling was
removed, since it became dead code once the h/w auth check was
supported.
This seems to be the best compromise for using the h/w auth check on
supporting SEC revisions; it keeps the link table logic simpler for the
fragmented cases.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In talitos_interrupt, upon a done interrupt, mask further done interrupts
and ack only any error interrupts.
In talitos_done, unmask done interrupts after completing processing.
In flush_channel, ack each done channel processed.
Keep done overflow interrupts masked because even though each pkt
is ack'ed, a few done overflows still occur.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since we ack early, the re-read interrupt status in talitos_error may
already have been updated with a new value. Pass the error ISR value
directly in order to report and handle the error based on the correct
error status.
Also remove the unused error tasklet.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Tue, Sep 23, 2008 at 08:06:32PM +0200, Dimitri Puzin (max@psycast.de) wrote:
> With this patch applied it still doesn't work as expected. The overflow
> messages are gone however syslog shows
> [ 120.924266] hifn0: abort: c: 0, s: 1, d: 0, r: 0.
> when doing cryptsetup luksFormat as in original e-mail. At this point
> cryptsetup hangs and can't be killed with -SIGKILL. I've attached
> SysRq-t dump of this condition.
Yes, I was wrong with the patch: HIFN does not support 64-bit addresses
afaics.
The attached patch prevents HIFN from being registered on 64-bit arches,
so the crypto layer will fall back to the software algorithms.
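A sketch of the guard in the driver's module init (exact message and
placement in hifn_795x.c may differ): bail out where DMA addresses don't
fit in 32 bits, so registration never happens and the software
implementations are used instead.

/* HIFN descriptors can only hold 32-bit addresses */
if (sizeof(dma_addr_t) > 4) {
	printk(KERN_INFO "HIFN supports only 32-bit addresses.\n");
	return -EINVAL;
}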
Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>